feat(cosmos): komodo-defi-proxy support #2173
Conversation
Thanks!
1-3 seconds. |
I see. Before this PR a different peer id was used on restart, but now, when restarting, a previous connection was still established on the relay side with the same peer id (due to restarting very quickly):

```rust
if other_established == 0 {
    // Ignore connections from blacklisted peers.
    if self.blacklisted_peers.contains(&peer_id) {
        debug!("Ignoring connection from blacklisted peer: {}", peer_id);
    } else {
        debug!("New peer connected: {}", peer_id);
        if self.config.i_am_relay {
            debug!("Sending IAmRelay to peer {:?}", peer_id);
            let event = Rpc {
                messages: Vec::new(),
                subscriptions: Vec::new(),
                control_msgs: vec![ControlAction::IAmRelay(true)],
            };
            self.notify_primary(peer_id, event);
```

which causes the below log to never occur:

```rust
debug!("Completed IAmrelay handling for peer: {:?}", peer_id);
```

I thought we shouldn't rely on this log in tests and use:

```rust
/// Repeatedly calls the `get_relay_mesh` RPC method until it returns a non-empty result or a timeout occurs.
/// This function is used to ensure that the relay mesh is populated before proceeding.
#[cfg(not(target_arch = "wasm32"))]
pub async fn check_seednodes(&mut self) -> Result<(), String> {
    let timeout_sec = 22.0;
    let start_time = now_float();
    let delay_ms = 500;
    loop {
        let response = self.rpc(&json!({"userpass": self.userpass, "method": "get_relay_mesh"})).await?;
        let relay_mesh: Json = json::from_str(&response.1).map_err(|e| ERRL!("{}", e))?;
        if !relay_mesh["result"].as_array().unwrap_or(&vec![]).is_empty() {
            return Ok(());
        }
        if now_float() - start_time > timeout_sec {
            return Err(ERRL!("Timeout while waiting for relay mesh to be populated"));
        }
        Timer::sleep_ms(delay_ms).await;
    }
}
```

It passes after I add a sleep here for 1 second:

```rust
block_on(mm_alice.wait_for_log(120., |log| log.contains(WATCHER_MESSAGE_SENT_LOG))).unwrap();
alice_conf.conf["dbdir"] = mm_alice.folder.join("DB").to_str().unwrap().into();
block_on(mm_alice.stop()).unwrap();
thread::sleep(Duration::from_secs(1));
let mut mm_alice = block_on(MarketMakerIt::start_with_envs(
    alice_conf.conf,
    alice_conf.rpc_password.clone(),
    None,
    &[],
))
.unwrap();
```

So this gives the relay time to close the old connection and avoid problems, and it sends
@smk762 @onur-ozkan I guess the above covers this test, but you can test it again @smk762 |
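As a sketch of how the restart test could rely on the proposed helper instead of log grepping — assuming `check_seednodes` from the snippet above were actually added to `MarketMakerIt` (it is only a suggestion in this comment, not merged code) — it might look like this:

```rust
// Hedged sketch, not merged code: restart the node, then wait on the relay mesh
// instead of a debug log line.
block_on(mm_alice.stop()).unwrap();
thread::sleep(Duration::from_secs(1)); // give the relay time to drop the old connection
let mut mm_alice = block_on(MarketMakerIt::start_with_envs(
    alice_conf.conf,
    alice_conf.rpc_password.clone(),
    None,
    &[],
))
.unwrap();
// Poll `get_relay_mesh` until it returns a non-empty result (or times out),
// so the assertion does not depend on the "Completed IAmrelay handling" log.
block_on(mm_alice.check_seednodes()).unwrap();
```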
That was one of the main purposes. We already have these lists on the proxy project, and we also do rate limiting based on the peer address. Using dynamic peer addresses would be problematic on the proxy side, and it would also complicate debugging the network. To bypass the current proxy logic, you would need to run MM2 with different passphrases and generate signed messages for each request, which is quite expensive. We will be handling this case by checking the "is this peer part of the network?" RPC (which I will be implementing in the coming sprint). The goal is to put a heavy load on the suspect's machine (by requiring mm2 to run on the main network) so they cannot abuse our services without significantly exhausting their resources. |
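Purely as an illustration of why a stable peer address matters for this (not the proxy project's actual code; all names below are made up), per-peer rate limiting is typically just a counter keyed by that address:

```rust
use std::collections::HashMap;
use std::time::{Duration, Instant};

// Illustrative fixed-window rate limiter keyed by a stable peer address.
// If the address changed on every restart, these counters (and any blacklist
// entries) would silently reset.
struct PeerRateLimiter {
    window: Duration,
    max_requests: u32,
    hits: HashMap<String, (Instant, u32)>, // peer address -> (window start, count)
}

impl PeerRateLimiter {
    fn allow(&mut self, peer_address: &str) -> bool {
        let now = Instant::now();
        let entry = self.hits.entry(peer_address.to_string()).or_insert((now, 0));
        if now.duration_since(entry.0) > self.window {
            *entry = (now, 0); // start a fresh window for this peer
        }
        entry.1 += 1;
        entry.1 <= self.max_requests
    }
}

fn main() {
    let mut limiter = PeerRateLimiter {
        window: Duration::from_secs(60),
        max_requests: 100,
        hits: HashMap::new(),
    };
    assert!(limiter.allow("12D3KooWExamplePeerAddress"));
}
```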
I understand the purpose of this PR for the komodo proxy, what I meant by pubkey whitelisting/blacklisting is the Q3 roadmap item related to compliance :) |
LGTM! I will add a commit that handles this comment #2173 (comment), as agreed with @onur-ozkan. This commit will need to be checked and approved by someone other than me.
For these tests, without this commit 3d0c05e, the relay/seednode allows another connection from the same peer id. In a real environment the below scenario can happen: this change introduces timing-dependent behavior in our p2p network (for the rare case of using the same seed on 2 different devices at the same time). These issues were not present when each light node used a random peer_id. Also, do we actually want to allow the same peer to connect multiple times to the p2p network but to a different set of relays if it has a different IP address? It's not a big deal as it's a very rare condition, but I am not sure why we'd want that. @onur-ozkan I think these tests should be ignored for now as the behaviour is undefined. If you agree, I can ignore these tests and push the fixes for the other tests that only needed a delay on restart; we can merge the PR after that and think about this case separately. What do you think? |
Thanks for the detailed debug report! I think we should revert this "persistent peer ID" logic for now, until we add the account locking functionality at the network level or until we add "multi-ip - single-peer" support for our network (both can be done together with the next p2p breaking change). Otherwise users might run into this confusing issue and not know what's wrong. Even if they report it to Komodo (e.g., via Discord channels), it will be hard for us to figure out the real problem. |
I agree, you can revert the persistent peer ID. Users will be able to reset their proxy usage stats by restarting the app, but that's not a big problem and will be solved once we do persistent peer IDs again. I think we will opt for "multi-ip - single-peer" support, but account locking at the network level isn't bad either. The decision will be tied to how we want to handle multi-device support; p2p account locking would actually simplify it a lot, especially when/if we do cloud/IPFS data backup, which means a single shared storage between all devices. |
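For anyone following along, the difference being discussed comes down to how the libp2p identity is created. A hedged sketch (illustrative only, not KDF's actual key-derivation code; the fixed secret is hypothetical):

```rust
use libp2p::identity::Keypair;

fn main() {
    // Random peer id: a fresh keypair per process start, so a quickly restarted
    // light node never collides with its own lingering connection on the relay.
    let random_key = Keypair::generate_ed25519();
    let random_peer_id = random_key.public().to_peer_id();

    // Persistent peer id (the behaviour being reverted here): derive the keypair
    // from a fixed 32-byte secret, so the same seed always maps to the same peer id.
    let secret = [42u8; 32]; // hypothetical secret for illustration only
    let persistent_key = Keypair::ed25519_from_bytes(secret).expect("32-byte secret");
    let persistent_peer_id = persistent_key.public().to_peer_id();

    println!("random: {random_peer_id}, persistent: {persistent_peer_id}");
}
```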
🔥
@smk762 I believe this test is not required anymore as we reverted back to random p2p key / peer_id for light nodes. Please confirm this @onur-ozkan |
That's correct. |
If I try to enable with
I get this error
Shouldn't `komodo_proxy` be optional? If I add
|
It should be optional, will fix it. Thanks. |
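A minimal sketch of what making the flag optional usually looks like on the deserialization side; the struct and field layout here are assumptions for illustration, and the actual fix landed separately as "use default value for `komodo_proxy`" (#2192):

```rust
use serde::Deserialize;

// Hypothetical node config for illustration; `#[serde(default)]` makes the
// `komodo_proxy` flag optional in the activation payload.
#[derive(Debug, Deserialize)]
struct RpcNode {
    url: String,
    #[serde(default)]
    komodo_proxy: bool,
}

fn main() {
    // Omitting `komodo_proxy` no longer fails deserialization; it defaults to false.
    let node: RpcNode =
        serde_json::from_str(r#"{ "url": "https://cosmos-rpc.example.com/" }"#).unwrap();
    assert!(!node.komodo_proxy);
    println!("{node:?}");
}
```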
* dev:
  chore(RPCs): rename `get_peers_info` RPC to `get_directly_connected_peers` (#2195)
  chore(WASM-builds): remove `wasm-opt` overriding (#2200)
  fix(coins): add p2p feature to mm2_net dependency (#2210)
  chore(test): turn on debug assertion (#2204)
  feat(sia): extract sia lib to external repo (#2167)
  feat(eth-swap): eth tpu v2 methods, eth docker test enhancements (#2169)
  fix(cors): allow OPTIONS request to KDF server (#2191)
  docs(README): update commit badges to use dev branch (#2193)
  use default value for `komodo_proxy` (#2192)
  feat(cosmos): komodo-defi-proxy support (#2173)
Mandatory Items:
Optional Items (will be handled in the next sprint to avoid blocking the Moon-Fi release):
Breaking Changes:
Tendermint activation payloads have been updated as follows: previously, tendermint activations required the `rpc_urls` field as a list of plain string values. This has now been changed to `nodes`, which expects a list of json objects instead (see the sketch below).
All previous fields in RPC methods that controlled komodo-defi-proxy (e.g., in eth activations) have been updated to `komodo_proxy`.
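For illustration, the shape of the change might look like the following; the exact field names inside each `nodes` entry (`url`, `komodo_proxy`) and the endpoints are assumptions based on this PR's description, not copied from it:

```rust
use serde_json::json;

fn main() {
    // Old style: plain string URLs.
    let old_payload = json!({ "rpc_urls": ["https://cosmos-rpc.example.com/"] });

    // New style: a list of objects, each pairing a URL with the proxy flag.
    let new_payload = json!({
        "nodes": [
            { "url": "https://cosmos-rpc.example.com/", "komodo_proxy": false },
            { "url": "https://cosmos-proxy.example.com/", "komodo_proxy": true }
        ]
    });

    println!("old: {old_payload}\nnew: {new_payload}");
}
```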
Docs Issue:
KomodoPlatform/komodo-docs-mdx#311