IBC Rate limiting contract #2408

Merged: 31 commits merged into main from nicolas/ibc-rate-limit-contract on Aug 23, 2022

Conversation

nicolaslara
Contributor

Partial work for: #2305

What is the purpose of the change

Add the contract to be used for IBC rate limiting

Brief Changelog

(for example:)

  • The metadata is stored in the blob store on job creation time as a persistent artifact
  • Deployments RPC transmits only the blob storage reference
  • Daemons retrieve the RPC data from the blob cache

Testing and Verifying

  • There are unit tests and integration tests in the contract

Documentation and Release Note

  • Does this pull request introduce a new feature or user-facing behavior changes? (no)
  • Is a relevant changelog entry added to the Unreleased section in CHANGELOG.md? (no)
  • How is the feature or change documented? (not applicable)

@nicolaslara nicolaslara requested a review from a team August 15, 2022 07:30
@nicolaslara nicolaslara self-assigned this Aug 15, 2022
@github-actions github-actions bot added the T:CI label Aug 15, 2022
Member

@p0mvn p0mvn left a comment

This is really interesting, great work @nicolaslara. I'm still ramping up on the core concepts, but I had an initial set of questions.

Resolved review threads: .github/workflows/contracts.yml, x/ibc-rate-limit/.beaker/state.json, x/ibc-rate-limit/.gitignore
return Err(ContractError::Unauthorized {});
}

let channels = CHANNEL_FLOWS.may_load(deps.storage, &channel_id)?;
Member

asking to learn - is CHANNEL_FLOWS a whitelist of some kind?

Contributor Author

CHANNEL_FLOWS is the main state of this contract:

pub const CHANNEL_FLOWS: Map<&str, Vec<ChannelFlow>> = Map::new("flow");

It maps the channel id to its "flows": how much value has moved through the channel during the currently active time period (this week, for example), and a "quota", which represents what percentage of the channel's value (balance + value locked on Osmosis) we allow to flow through that channel in a given duration (1w, for example).
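For readers skimming the thread, here is a minimal sketch of the shapes involved. Only the CHANNEL_FLOWS line is quoted from the code above; the field and type definitions below are illustrative placeholders, not the contract's actual state.rs.

```rust
use cosmwasm_std::{Timestamp, Uint128};
use cw_storage_plus::Map;
use schemars::JsonSchema;
use serde::{Deserialize, Serialize};

/// Illustrative: what fraction of the channel's value may move per period.
#[derive(Serialize, Deserialize, Clone, Debug, PartialEq, JsonSchema)]
pub struct Quota {
    pub max_percentage: u32,
    pub duration_secs: u64, // e.g. one week
}

/// Illustrative: how much value has moved through the channel this period.
#[derive(Serialize, Deserialize, Clone, Debug, PartialEq, JsonSchema)]
pub struct Flow {
    pub inflow: Uint128,
    pub outflow: Uint128,
    pub period_end: Timestamp, // when the counters reset
}

#[derive(Serialize, Deserialize, Clone, Debug, PartialEq, JsonSchema)]
pub struct ChannelFlow {
    pub quota: Quota,
    pub flow: Flow,
}

/// channel_id -> the flows/quotas tracked for that channel (quoted from the contract).
pub const CHANNEL_FLOWS: Map<&str, Vec<ChannelFlow>> = Map::new("flow");
```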

pub period_end: Timestamp,
}

impl Flow {
Member

Can we document what this is, exactly, please? I'm not very familiar with IBC concepts, so it would be amazing to have docs for some of these abstractions.

Contributor Author

Yes! A flow isn't really an IBC concept but something specific to rate limiting. I'll add comments (and maybe a readme?) to document this.

Contributor Author

Added some docs to state.rs

Contributor Author

A few of the names have changed, but I've added doc comments everywhere.

@ValarDragon
Member

ValarDragon commented Aug 16, 2022

Should we have the IBC interface functions be under Cosmwasm Sudo message? https://docs.cosmwasm.com/docs/1.0/smart-contracts/sudo/

(SendPacket, RecvPacket, AddChannel, CloseChannel)

@nicolaslara
Contributor Author

> Should we have the IBC interface functions be under Cosmwasm Sudo message? https://docs.cosmwasm.com/docs/1.0/smart-contracts/sudo/
>
> (SendPacket, RecvPacket, AddChannel, CloseChannel)

The reason to add these as regular execute messages instead of sudo is that they can be executed by someone other than the chain. Right now we define who can execute them when we instantiate the contract:

/// Only this module can manage the contract
pub const GOVMODULE: Item<Addr> = Item::new("gov_module");
/// Only this module can execute transfers
pub const IBCMODULE: Item<Addr> = Item::new("ibc_module");

Permissions are then checked so that only the gov module is allowed to make changes and only the transfer module is allowed to send/recv packets.

My thought process is that even if these values will most likely be set to the module addresses, we could set them to something else (say, a multisig) if necessary.
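As a rough illustration of how such a permission check can look inside an execute handler (the function name and error handling here are placeholders, not the contract's actual code):

```rust
use cosmwasm_std::{Addr, DepsMut, MessageInfo, Response, StdError, StdResult};
use cw_storage_plus::Item;

/// Only this module can manage the contract (quoted from the snippet above).
pub const GOVMODULE: Item<Addr> = Item::new("gov_module");
/// Only this module can execute transfers.
pub const IBCMODULE: Item<Addr> = Item::new("ibc_module");

/// Hypothetical management handler: reject any sender other than the
/// governance address stored at instantiation.
pub fn try_add_channel(deps: DepsMut, info: MessageInfo, channel_id: String) -> StdResult<Response> {
    let gov_module = GOVMODULE.load(deps.storage)?;
    if info.sender != gov_module {
        return Err(StdError::generic_err("unauthorized"));
    }
    // ...create the channel's flows/quotas here...
    Ok(Response::new()
        .add_attribute("method", "try_add_channel")
        .add_attribute("channel_id", channel_id))
}
```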

@ValarDragon
Member

That makes sense. Wdyt about send packet and recv packet being sudo though? I feel like we have an API leak if it's called by something not in the state machine, right?

@nicolaslara
Contributor Author

> That makes sense. Wdyt about send packet and recv packet being sudo though? I feel like we have an API leak if it's called by something not in the state machine, right?

yeah, that makes sense. I'll make the change.

nicolaslara and others added 4 commits August 17, 2022 09:01
#[derive(Serialize, Deserialize, Clone, Debug, PartialEq, JsonSchema, Copy)]
pub struct Flow {
// Q: Do we have edge case issues with inflow/outflow being u128, e.g. what if a token has super high precision.
Member

Highlighting question

Contributor Author

That's a really good question. Internally, cosmwasm uses Uint128 (bank queries, for example): https://github.com/CosmWasm/cosmwasm/blob/main/packages/std/src/coins.rs#L10. So that's why I went with 128.

Their implementation is basically u128 with some helpers attached.

It should be easy to change to larger representations though. Cosmwasm has Uint256 and Uint512.
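As a tiny, self-contained illustration of those wrapper types (not code from the contract):

```rust
use cosmwasm_std::{Uint128, Uint256};

fn main() {
    // Uint128 wraps u128 and exposes checked arithmetic helpers.
    let inflow = Uint128::new(1_000);
    let extra = Uint128::new(250);
    let total = inflow.checked_add(extra).expect("no overflow");
    assert_eq!(total.u128(), 1_250u128);

    // If 128 bits ever became a concern, widening is a plain conversion.
    let wide = Uint256::from(total);
    assert_eq!(wide, Uint256::from(1_250u128));
}
```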

@@ -119,6 +122,8 @@ impl From<&QuotaMsg> for Quota {
}
}

// TODO: Add docs that this is the main tracker, we should be setting multiple of these per contract
// Q: Should we rename to "ChannelRateLimiter", and rename flow to "FlowTracker"?
#[derive(Serialize, Deserialize, Clone, Debug, PartialEq, JsonSchema)]
pub struct ChannelFlow {
Member

What do you think about moving the CHANNEL_FLOWS and ChannelFlow definitions to the top of this file?

Contributor Author

Usually cosmwasm contracts have the state definition at the bottom of the state file, but we could move this around, or maybe split the data structures into a separate file.

Comment on lines 408 to 413
// Receiving 1% should fail. 3% already executed through the channel
let recv_msg = ExecuteMsg::RecvPacket {
channel_id: "channel".to_string(),
channel_value: 3_000,
funds: 30,
};
Member

This feels incorrect right? (Relative to what we want)

E.g. the 1% recv here should be going to -1% of what we had the prior week, rather than total net change

Contributor Author

The way it's currently implemented, it tracks only one balance. In that case, 3% has moved out of the chain, so if 1% were to move in, we'd be at a 2% balance, so that's not allowed. The opposite would work: with quotas = {send: 3%, recv: 2%}, even if you've already sent 3%, you could receive 1% because that would put the balance at 2%.

We could change the implementation to what you're describing, though. This probably makes more sense if the quotas aren't going to be symmetric. I think the "balance" implementation is a leftover from when I had the same quota for send and receive.

Contributor Author

> going to -1% of what we had the prior week

Right now we're not tracking "what we had the previous week". Instead, the percentages are calculated based on the value of the channel at the time of the request. The current implementation actually allows more than the specified percentage (as the value grows/shrinks with every send), but the actual max converges pretty quickly for small percentages. The advantage is that we avoid tracking the network value for each channel.

Happy to change the implementation here. I think both make sense
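For intuition, here is a rough sketch of the "percentage of the channel's current value" check described above; the names, the send direction, and the exact formula are illustrative, not the contract's actual logic:

```rust
use cosmwasm_std::Uint128;

/// Illustrative only: allow an outgoing transfer if the period's total outflow
/// stays within `quota_percent` of the channel's value as measured right now.
fn allow_send(channel_value: Uint128, outflow_so_far: Uint128, funds: Uint128, quota_percent: u128) -> bool {
    let max_allowed = channel_value.multiply_ratio(quota_percent, 100u128);
    outflow_so_far + funds <= max_allowed
}

fn main() {
    // 3% has already left a channel valued at 3_000; another 1% (30) would
    // exceed a 3% quota, so the transfer is rejected.
    assert!(!allow_send(Uint128::new(3_000), Uint128::new(90), Uint128::new(30), 3));
}
```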

Contributor Author

I've just pushed a change so that this behaves as you expected. It's a bit less clean, as we need to consider both in and out flows, but closer to what a user would expect.

Contributor Author

Added the change for "what we had the previous week" (caching the value in the quota on the first call or when it expires) in c2a72ad.

I find it a bit confusing that the differences between the quotas could be quite large when you have multiple quotas for a path (denom + channel). Consider, for example, a case where the supply was 100 last week, but someone trades 1M and wants to send 1% over IBC. Even if the daily 1% limit would allow it, the weekly one would still prevent you from sending this through IBC.

Maybe this is what we want. We can have a quick discussion about the tradeoffs and pick one.
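A loose sketch of the caching idea mentioned above (snapshot the channel value when a period starts and measure the quota against that snapshot); the struct and method names here are hypothetical, not taken from c2a72ad:

```rust
use cosmwasm_std::{Timestamp, Uint128};

/// Hypothetical shape: the quota remembers the channel value seen at the start
/// of the current period and computes its capacity against that snapshot.
pub struct Quota {
    pub max_percent: u128,
    pub duration_secs: u64,
    pub channel_value: Option<Uint128>, // cached at the start of each period
}

impl Quota {
    pub fn capacity(&mut self, now: Timestamp, period_end: &mut Timestamp, current_value: Uint128) -> Uint128 {
        if self.channel_value.is_none() || now >= *period_end {
            // First call or expired period: snapshot the value and reset the window.
            self.channel_value = Some(current_value);
            *period_end = now.plus_seconds(self.duration_secs);
        }
        self.channel_value
            .unwrap_or(current_value)
            .multiply_ratio(self.max_percent, 100u128)
    }
}
```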

Member

Oh, that's a good point, that it's super weird. Let's definitely discuss what makes most sense.

Comment on lines 279 to 280
#[test]
fn consume_allowance() {
Member

I think we should add a test ensuring that quotas roll over as expected

Contributor Author

This is done implicitly here, but I could check the internal state to make sure the reset actually sets the values to zero.

The reason to have that in an integration test is that we need to modify the time, so we need a mock chain.

@ValarDragon
Member

ValarDragon commented Aug 17, 2022

Left some comments with questions (some design questions, some minor code ones)

Awesome work on it!

I'm happy to merge this as long as we track these and write a spec, to unblock the other work going on for getting rate limits to happen!

@nicolaslara nicolaslara force-pushed the nicolas/ibc-rate-limit-contract branch from ce2b292 to c37a968 Compare August 18, 2022 12:16
@nicolaslara
Contributor Author

Moved SendPacket and RecvPacket to sudo. There's a caveat worth noting here, though: when calling the contract via sudo, we don't specify a sender, which means that any module in the chain can call the contract. It's the responsibility of the Go side to make sure this is not called in the wrong place.
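For context, a sudo entry point for these messages could look roughly like the sketch below. The variant and field names mirror the ExecuteMsg shown earlier in the thread; the bodies are placeholders, not the merged code:

```rust
use cosmwasm_std::{entry_point, DepsMut, Env, Response, StdError};
use schemars::JsonSchema;
use serde::{Deserialize, Serialize};

/// Sudo messages carry no sender: the chain itself (the transfer middleware on
/// the Go side) is trusted to call them in the right places.
#[derive(Serialize, Deserialize, Clone, Debug, PartialEq, JsonSchema)]
#[serde(rename_all = "snake_case")]
pub enum SudoMsg {
    SendPacket { channel_id: String, channel_value: u128, funds: u128 },
    RecvPacket { channel_id: String, channel_value: u128, funds: u128 },
}

#[entry_point]
pub fn sudo(_deps: DepsMut, _env: Env, msg: SudoMsg) -> Result<Response, StdError> {
    match msg {
        SudoMsg::SendPacket { channel_id, .. } => {
            // Check and update the channel's outflow against its quotas here.
            Ok(Response::new().add_attribute("method", "send_packet").add_attribute("channel_id", channel_id))
        }
        SudoMsg::RecvPacket { channel_id, .. } => {
            // Check and update the channel's inflow against its quotas here.
            Ok(Response::new().add_attribute("method", "recv_packet").add_attribute("channel_id", channel_id))
        }
    }
}
```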

Member

@ValarDragon ValarDragon left a comment

Awesome job!

@ValarDragon
Member

Let's merge once tests pass!

@ValarDragon ValarDragon merged commit 4fde976 into main Aug 23, 2022
@ValarDragon ValarDragon deleted the nicolas/ibc-rate-limit-contract branch August 23, 2022 00:02
@asalzmann
Contributor

@nicolaslara we've been thinking about this problem as well, something I'm curious about - why key flows on (channel, denom)? Could a malicious actor create a new client-connection-channel and bridge assets off of the chain on a channel that's not rate-limited?

This is probably less relevant for assets that have been bridged onto the chain, but might be relevant for native assets that are minted and bridged off of the source chain.

@nicolaslara
Contributor Author

That's an interesting question @asalzmann, and I think it's the same for native and bridged assets. If the assets get sent through a suspicious channel (and thus wrapped accordingly) they just wouldn't be recognized by the other chains.

Don't get me wrong, that's still an issue, as the assets would have been burned on Osmosis, but in such a scenario we're probably dealing with a hack that has to be remedied within the chain anyway. The bigger danger (and what rate limiting is trying to prevent) is those assets flowing through "proper" channels, where they would be indistinguishable* from others of the same denom.

If we wanted to rate-limit every channel + denom, it could be done by adding them to the list when a channel or asset is created, but that's a bit cumbersome. An alternative would be to have a global rate limit: right now, if nothing is configured, the contract allows all transfers; that could be changed so that, if nothing is configured, the contract assumes a default rate limit (say, 5% daily; sketched below).


*indistinguishable is probably too strong here, as the transaction history is still available, but once they start flowing through other chains the chance of a coordinated recovery strategy is low.
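A trivial sketch of the default-limit fallback floated above; this is purely hypothetical behavior (the merged contract allows all transfers when no quota is configured for a channel):

```rust
/// Hypothetical: when a channel has no configured quota, fall back to a
/// conservative default instead of allowing unlimited transfers.
fn effective_quota_percent(configured: Option<u128>) -> u128 {
    const DEFAULT_DAILY_PERCENT: u128 = 5;
    configured.unwrap_or(DEFAULT_DAILY_PERCENT)
}

fn main() {
    assert_eq!(effective_quota_percent(None), 5);
    assert_eq!(effective_quota_percent(Some(10)), 10);
}
```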

@asalzmann
Contributor

asalzmann commented Aug 23, 2022

Thanks for talking through your reasoning here, it's very helpful!

> If the assets get sent through a suspicious channel (and thus wrapped accordingly) they just wouldn't be recognized by the other chains.

The main question I have is how users of other chains perceive the trust properties of vouchers originating from different channels. My mental model is that the trust properties between channels are equivalent and that the trust properties of vouchers are derived from clients, although it's very possible that mental model is wrong.

A practical case where this could matter would be a DEX that allows deposits of any IBC voucher from another chain that's one hop away, and routes the asset to the associated pool by querying the denomTrace (this might be a bad design for a DEX!) Global rate limiting could definitely work!

> The bigger danger (and what rate limiting is trying to prevent) is those assets flowing through "proper" channels, where they would be indistinguishable* from others of the same denom.

Agreed! I like the approach.

@asalzmann
Contributor

One other question @nicolaslara, what's the rationale for rate-limiting inflows?

@nicolaslara
Contributor Author

> The main question I have is how users of other chains perceive the trust properties of vouchers originating from different channels. My mental model is that the trust properties between channels are equivalent and that the trust properties of vouchers are derived from clients, although it's very possible that mental model is wrong.
>
> A practical case where this could matter would be a DEX that allows deposits of any IBC voucher from another chain that's one hop away, and routes the asset to the associated pool by querying the denomTrace (this might be a bad design for a DEX!) Global rate limiting could definitely work!

I'd be interested to know if there are chains out there that do this. By default, IBC-go sets the denom to the trace hash, which effectively makes them different coins. A chain with this design would need to either override this, or have some mechanism for mixing them (a stableswap, for example).

I think this is a larger question of IBC routing/wrapping that is still unresolved. Not sure if there's an accepted design for this, but I'd expect unwrapping IBC denoms would require asking the issuing chains to validate them (I haven't thought about this enough, though).

> One other question @nicolaslara, what's the rationale for rate-limiting inflows?

In the case of an infinite mint on another chain (think the Terra collapse), someone could bring unlimited amounts of a token and tank the price of assets pooled against it, effectively draining the pools.
