Validate proactive refresh endpoint #483
Conversation
So I don't really know what this proactive refresh feature is all about. Can you give me a quick rundown? I think it would be helpful to have a bit more context here.
if u32::from_be_bytes(
    last_block_number_recorded
        .try_into()
        .map_err(|_| ProtocolErr::Conversion("Block number conversion"))?,
This really shouldn't fail though, right? And what would a caller even be able to do in this case? My gut says we should panic here, indicating a bug.
idk, even though I agree this should never fail per se, I like keeping error handling consistent throughout the codebase. Introducing panics is a design change to our error handling, and should probably be discussed first.
@JesseAbram I wouldn't worry about keeping things consistent in this case.
The key question here is: what can the calling code do in case of an error? In this case it seems like nothing, because it's a bug (we stored something in the db that couldn't be deserialized into a block number), so it would be acceptable to panic instead of moving forward as if there were something to handle.
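For concreteness, a minimal sketch of the panic-on-bug alternative being discussed (the helper name is hypothetical; it assumes last_block_number_recorded is the raw bytes read back from the kv store):

// Hypothetical helper: a malformed stored block number is an invariant
// violation (a bug), so it panics rather than returning a recoverable error.
fn decode_block_number(last_block_number_recorded: Vec<u8>) -> u32 {
    u32::from_be_bytes(
        last_block_number_recorded
            .try_into()
            .expect("stored block number must be exactly 4 big-endian bytes"),
    )
}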
    .await?
    .ok_or_else(|| ProtocolErr::OptionUnwrapError("Failed to get block number".to_string()))?
    .number;
// prevents multiple repeated messages being sent
Why are we checking for future block numbers here instead of simply checking for the existence of a key in the db?
@HCastano I'm not sure what you mean. How would that prevent the same proactive refresh being triggered more than once?
@ameba23 so from my understanding it seems like this can only be triggered once, and we'd have a record of that in the DB. So could we not check that key directly instead of checking for some future block number?
But I guess this can be triggered multiple times, so it's fine?
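For anyone following along, a minimal sketch of the guard being discussed (names are hypothetical): comparing block numbers rejects a re-delivered message for an already-handled block, which a bare does-the-key-exist check can't do once the key has been written at least once.

// Hypothetical sketch: only act on a refresh message that refers to a newer
// block than the last one recorded; replays of older messages are dropped.
fn should_handle_refresh(incoming_block: u32, last_recorded_block: u32) -> bool {
    incoming_block > last_recorded_block
}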
@@ -29,8 +29,18 @@ pub enum KeyVisibility {
}

/// Information from the validators in signing party
#[cfg_attr(feature = "std", derive(Serialize, Deserialize))]
#[derive(Clone, Encode, Decode, Debug, Eq, PartialEq, TypeInfo)]
#[cfg(not(feature = "wasm"))]
Why did you change this from being std to not(wasm)?
@HCastano this was probably the only thing which would compile.
For context, this crate is used in three different environments, which makes things complicated:
- The runtime uses it on the wasm platform, without std
- server uses it natively, with std
- entropy-protocol uses it with std, on either native or wasm

frame-support doesn't work on wasm with std.
I double checked that entropy-protocol will compile to wasm on this branch, and I'm gonna add a check to CI for it.
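As a rough illustration of why that combination works (the struct body below is hypothetical, not from this PR): cfg_attr only attaches the serde derives when std is enabled, while cfg(not(feature = "wasm")) compiles the item out entirely on the wasm platform, sidestepping the frame-support clash.

#[cfg(feature = "std")]
use serde::{Deserialize, Serialize};

// Sketch of the gating pattern from the diff above.
// The serde derives only exist on builds with `std` enabled...
#[cfg_attr(feature = "std", derive(Serialize, Deserialize))]
// ...and the whole item disappears on the `wasm` platform, where
// frame-support's std features don't work.
#[cfg(not(feature = "wasm"))]
pub struct ValidatorInfo {
    pub tss_account: [u8; 32], // hypothetical field
}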
@@ -36,21 +36,22 @@ use crate::{
    AppState,
};

-pub async fn setup_client() {
+pub async fn setup_client() -> KvManager {
So was there no way to access the kvdb from the tests before?
I'm thinking there might be a better way to provide access than to pass this out from the setup function.
Well, normally we don't need it, and the kv manager can be accessed with our unsafe endpoints, but that is way less clean and would bloat the code imo.
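A sketch of the kind of test this enables (the test body, the triggered request, and the get accessor are assumptions, not code from this PR):

// Hypothetical test sketch: with `setup_client` handing back the KvManager,
// a test can assert on stored state directly instead of reaching for the
// unsafe endpoints.
#[tokio::test]
async fn stores_latest_block_number() {
    let kv_manager = setup_client().await;
    // ... hit the proactive refresh endpoint here ...
    let stored = kv_manager.kv().get("LATEST_BLOCK_NUMBER").await.unwrap();
    assert_eq!(stored.len(), 4); // a u32 block number as big-endian bytes
}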
Looks great! (obviously pending CI passing)
Seems a shame there isn't a simpler way of checking that the HTTP request is made by the TSS server's corresponding chain node, as that is the only one that should be making these requests. But this works!
crypto/server/src/user/api.rs
@@ -544,8 +543,9 @@ pub async fn validate_new_user(
if verifying_data_hash != chain_data_hash {
    return Err(UserErr::InvalidData);
}
kv_manager.kv().delete("LATEST_BLOCK_NUMBER").await?;
let reservation = kv_manager.kv().reserve_key("LATEST_BLOCK_NUMBER".to_string()).await?;
kv_manager.kv().delete("LATEST_BLOCK_NUMBER_NEW_USER").await?;
I think that reserve_key will fail if the key exists. TBH I don't understand the reasoning behind having a reservation system rather than a put method which returns an error if the key already exists, as sled has some pretty good thread-safety features out of the box. But that's not relevant to this PR.
pallets/propagation/src/lib.rs
// We construct the request
// important: the header->Content-Type must be added and match that of the receiving
// party!!
-let pending = http::Request::post(url, vec![validators_info.encode()]) // scheint zu klappen
+let pending = http::Request::post(url, vec![req_body.encode()]) // scheint zu klappen
In case anyone was wondering, this comment says 'seems to work', lol
oh this is really old lol, I'll remove it. Long story, not gonna explain here.
Co-authored-by: Hernando Castano <HCastano@users.noreply.github.com>
We can def look into having the TSS key held by the substrate node to sign off on messages. That would simplify this function, but it would make launching a node harder and wouldn't really reduce computation much, so idk if it's worth it.
It is a way to rotate keys in an MPC setup where the new keys still resolve to the same signature address. In Entropy, the goal is to have the network trigger this and for it to happen in the background.
@JesseAbram and why would we want to trigger this? What's the benefit of rotating keys here? And which keys are being rotated?
The keys are the Entropy shards, the "1 of n" key shards that you do the MPC with. This matters because, as the validator set rotates and old validators are booted for new validators, their key shards become invalid.
* master:
  Integration test for validator joining network (#493)
  Add `ENTROPY_NODE` environment variable (#497)
  Change `SocketAddr` type for `String` (#496)
  Allow big protocol messages (#495)
  Add test for `EncryptedConnection` (#494)
  Use bincode rather than JSON for protocol and subscribe messages (#492)
  Remark on possible need to address TSS server from the Docker host. (#490)
  no proactive refresh on private key visibility (#485)
  Auxiliary data for program evaluation (#475)
  Validate proactive refresh endpoint (#483)
Adds validation to the proactive refresh endpoint, making sure messages are reflected on chain and are not repeated.