Reshare confirmation #965

Merged 31 commits on Jul 30, 2024
24de588
Add keyshare trigger
JesseAbram Jul 17, 2024
25f2ba1
tests
JesseAbram Jul 17, 2024
f8e462d
add parent key verifying key onchain
JesseAbram Jul 18, 2024
3ee6614
add next signer to chain
JesseAbram Jul 18, 2024
ca6968b
get party ids
JesseAbram Jul 18, 2024
b137a55
get session info
JesseAbram Jul 18, 2024
87e2390
execute reshare protocol
JesseAbram Jul 19, 2024
9e6db22
prepare refresh tests
JesseAbram Jul 19, 2024
eba2a82
proactive refresh test
JesseAbram Jul 19, 2024
bff3238
add more info from chain
JesseAbram Jul 22, 2024
7ff8ba9
error handling
JesseAbram Jul 22, 2024
9d23d85
clean
JesseAbram Jul 22, 2024
c3fa564
clean
JesseAbram Jul 22, 2024
f11f95c
metadata
JesseAbram Jul 22, 2024
6957598
Merge branch 'master' of github.com:entropyxyz/entropy-core into key-…
JesseAbram Jul 22, 2024
3827de0
fix
JesseAbram Jul 22, 2024
41508bf
fix test
JesseAbram Jul 23, 2024
40751f9
Add confirm signer rotations
JesseAbram Jul 24, 2024
60da524
add events
JesseAbram Jul 24, 2024
0412852
add benchmarks
JesseAbram Jul 24, 2024
cae97fc
add in benchmark
JesseAbram Jul 24, 2024
17f8418
fix
JesseAbram Jul 24, 2024
003a452
Merge branch 'master' of github.com:entropyxyz/entropy-core into resh…
JesseAbram Jul 26, 2024
1ebc3a0
clean
JesseAbram Jul 26, 2024
0e41d56
fix benchmarks
JesseAbram Jul 26, 2024
4e30876
add confirm reshare in tss
JesseAbram Jul 29, 2024
2e2cfeb
add keyshare store and test
JesseAbram Jul 29, 2024
b059027
refund fees for confirm registered
JesseAbram Jul 29, 2024
a3c6dc8
dispatch class operational
JesseAbram Jul 29, 2024
91176a2
changelog
JesseAbram Jul 29, 2024
f1631d2
clean
JesseAbram Jul 30, 2024
1 change: 1 addition & 0 deletions CHANGELOG.md
@@ -19,6 +19,7 @@ At the moment this project **does not** adhere to
- Jumpstart network ([#918](https://github.com/entropyxyz/entropy-core/pull/918))
- Add Signer groups and rotation ([#938](https://github.com/entropyxyz/entropy-core/pull/938))
- Split jumpstart and register flows ([#952](https://github.com/entropyxyz/entropy-core/pull/952))
- Reshare confirmation ([#965](https://github.com/entropyxyz/entropy-core/pull/965))

## [0.2.0](https://github.com/entropyxyz/entropy-core/compare/release/v0.1.0...release/v0.2.0) - 2024-07-11

Binary file modified crates/client/entropy_metadata.scale
Binary file not shown.
44 changes: 40 additions & 4 deletions crates/threshold-signature-server/src/validator/api.rs
@@ -21,7 +21,7 @@ use crate::{
get_signer_and_x25519_secret,
helpers::{
launch::FORBIDDEN_KEYS,
substrate::{get_stash_address, get_validators_info, query_chain},
substrate::{get_stash_address, get_validators_info, query_chain, submit_transaction},
},
signing_client::{protocol_transport::open_protocol_connections, ProtocolErr},
validator::errors::ValidatorErr,
@@ -40,7 +40,10 @@ use parity_scale_codec::{Decode, Encode};
use rand_core::OsRng;
use sp_core::Pair;
use std::{collections::BTreeSet, str::FromStr, time::Duration};
use subxt::{backend::legacy::LegacyRpcMethods, utils::AccountId32, OnlineClient};
use subxt::{
backend::legacy::LegacyRpcMethods, ext::sp_core::sr25519, tx::PairSigner, utils::AccountId32,
OnlineClient,
};
use synedrion::{
make_key_resharing_session, sessions::SessionId as SynedrionSessionId, KeyResharingInputs,
NewHolder, OldHolder,
@@ -200,12 +203,45 @@ pub async fn new_reshare(
.map_err(|_| ValidatorErr::ProtocolError("Error executing protocol".to_string()))?
.0
.ok_or(ValidatorErr::NoOutputFromReshareProtocol)?;
let _serialized_key_share = key_serialize(&new_key_share)
let serialized_key_share = key_serialize(&new_key_share)
.map_err(|_| ProtocolErr::KvSerialize("Kv Serialize Error".to_string()))?;
// TODO: do reshare call confirm_reshare (delete key when done) see #941
let network_parent_key = hex::encode(NETWORK_PARENT_KEY);
// TODO: should this be a two step process? see https://github.com/entropyxyz/entropy-core/issues/968
if app_state.kv_store.kv().exists(&network_parent_key).await? {
app_state.kv_store.kv().delete(&network_parent_key).await?
};

let reservation = app_state.kv_store.kv().reserve_key(network_parent_key).await?;
app_state.kv_store.kv().put(reservation, serialized_key_share.clone()).await?;

// TODO: Error handling is really complex and needs to be thought about.
confirm_key_reshare(&api, &rpc, &signer).await?;
Ok(StatusCode::OK)
}

/// Confirms that a validator has successfully reshared.
pub async fn confirm_key_reshare(
api: &OnlineClient<EntropyConfig>,
rpc: &LegacyRpcMethods<EntropyConfig>,
signer: &PairSigner<EntropyConfig, sr25519::Pair>,
) -> Result<(), ValidatorErr> {
// TODO error handling + return error
// TODO fire and forget, or wait for in block maybe Ddos error
// TODO: Understand this better, potentially use sign_and_submit_default
// or other method under sign_and_*
let block_hash = rpc
.chain_get_block_hash(None)
.await?
.ok_or_else(|| ValidatorErr::OptionUnwrapError("Error getting block hash".to_string()))?;

let nonce_call = entropy::apis().account_nonce_api().account_nonce(signer.account_id().clone());
let nonce = api.runtime_api().at(block_hash).call(nonce_call).await?;

let confirm_key_reshare_request = entropy::tx().staking_extension().confirm_key_reshare();
submit_transaction(api, rpc, signer, &confirm_key_reshare_request, Some(nonce)).await?;
Ok(())
}

/// Validation for if an account can cover tx fees for a tx
pub async fn check_balance_for_fees(
api: &OnlineClient<EntropyConfig>,
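The key-share swap in `new_reshare` (delete any existing network parent key, then reserve and write the freshly reshared share) can be sketched with an in-memory map standing in for the TSS key-value store. The function and map here are illustrative assumptions, not the real entropy-kvdb API:

```rust
use std::collections::HashMap;

// Toy stand-in for the TSS key-value store; the real store uses an
// exists/delete then reserve_key/put sequence, modelled here as a
// remove followed by an insert.
fn store_new_key_share(kv: &mut HashMap<String, Vec<u8>>, key: &str, share: Vec<u8>) {
    // Delete any existing parent key share first, mirroring new_reshare.
    kv.remove(key);
    kv.insert(key.to_string(), share);
}

fn main() {
    let mut kv = HashMap::new();
    kv.insert("network-parent-key".to_string(), vec![1u8]);
    store_new_key_share(&mut kv, "network-parent-key", vec![2u8]);
    // The old share is gone and the new one is in place.
    assert_eq!(kv["network-parent-key"], vec![2u8]);
}
```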
28 changes: 19 additions & 9 deletions crates/threshold-signature-server/src/validator/tests.rs
@@ -21,18 +21,16 @@ use crate::{
helpers::{
launch::{development_mnemonic, ValidatorName, FORBIDDEN_KEYS},
substrate::submit_transaction,
tests::initialize_test_logger,
tests::{initialize_test_logger, spawn_testing_validators, unsafe_get},
validator::get_signer_and_x25519_secret_from_mnemonic,
},
validator::errors::ValidatorErr,
};
use entropy_kvdb::clean_tests;
use entropy_shared::{OcwMessageReshare, EVE_VERIFYING_KEY, MIN_BALANCE};
use entropy_shared::{OcwMessageReshare, EVE_VERIFYING_KEY, MIN_BALANCE, NETWORK_PARENT_KEY};
use entropy_testing_utils::{
constants::{ALICE_STASH_ADDRESS, RANDOM_ACCOUNT},
spawn_testing_validators,
substrate_context::testing_context,
test_context_stationary,
substrate_context::{test_node_process_testing_state, testing_context},
};
use futures::future::join_all;
use parity_scale_codec::Encode;
@@ -50,20 +48,25 @@ async fn test_reshare() {

let alice = AccountKeyring::Alice;

let cxt = test_context_stationary().await;
let cxt = test_node_process_testing_state(true).await;
let (_validator_ips, _validator_ids) = spawn_testing_validators(true).await;
let api = get_api(&cxt.node_proc.ws_url).await.unwrap();
let rpc = get_rpc(&cxt.node_proc.ws_url).await.unwrap();
let validator_ports = vec![3001, 3002, 3003];
let api = get_api(&cxt.ws_url).await.unwrap();
let rpc = get_rpc(&cxt.ws_url).await.unwrap();

let client = reqwest::Client::new();
let block_number = rpc.chain_get_header(None).await.unwrap().unwrap().number + 1;
let mut key_shares_before = vec![];
for port in &validator_ports {
key_shares_before.push(unsafe_get(&client, hex::encode(NETWORK_PARENT_KEY), *port).await);
}

let onchain_reshare_request =
OcwMessageReshare { new_signer: alice.public().encode(), block_number };
setup_for_reshare(&api, &rpc).await;

let response_results = join_all(
vec![3001, 3002, 3003]
validator_ports
.iter()
.map(|port| {
client
@@ -77,6 +80,13 @@
for response_result in response_results {
assert_eq!(response_result.unwrap().text().await.unwrap(), "");
}
for i in 0..validator_ports.len() {
assert_ne!(
key_shares_before[i],
unsafe_get(&client, hex::encode(NETWORK_PARENT_KEY), validator_ports[i]).await
);
}

clean_tests();
}

1 change: 1 addition & 0 deletions node/cli/src/chain_spec/dev.rs
@@ -276,6 +276,7 @@ pub fn development_genesis_config(
})
.collect::<Vec<_>>(),
proactive_refresh_data: (vec![], vec![]),
mock_signer_rotate: false,
Contributor: I'm guessing this will be removed when this is all finished and we always rotate signers.

Member Author: Oh no, this is for rotating on spin up; it is the only way I can get the tests in the TSS into the right state to actually test it (like the proactive refresh data above).

inital_signers: initial_authorities.iter().map(|auth| {
auth.0.clone()
})
1 change: 1 addition & 0 deletions node/cli/src/chain_spec/integration_tests.rs
@@ -208,6 +208,7 @@ pub fn integration_tests_genesis_config(
],
vec![EVE_VERIFYING_KEY.to_vec(), DAVE_VERIFYING_KEY.to_vec()],
),
mock_signer_rotate: true,
inital_signers: initial_authorities.iter().map(|auth| {
auth.0.clone()
})
1 change: 1 addition & 0 deletions node/cli/src/chain_spec/testnet.rs
@@ -414,6 +414,7 @@ pub fn testnet_genesis_config(
})
.collect::<Vec<_>>(),
proactive_refresh_data: (vec![], vec![]),
mock_signer_rotate: false,
inital_signers: initial_authorities.iter().map(|auth| {
auth.0.clone()
})
1 change: 1 addition & 0 deletions pallets/propagation/src/mock.rs
@@ -372,6 +372,7 @@ pub fn new_test_ext() -> sp_io::TestExternalities {
(2, (4, NULL_ARR, vec![11])),
],
proactive_refresh_data: (vec![], vec![]),
mock_signer_rotate: false,
inital_signers: vec![5, 6],
};

2 changes: 1 addition & 1 deletion pallets/registry/src/lib.rs
@@ -608,7 +608,7 @@ pub mod pallet {
<T as Config>::WeightInfo::confirm_register_registering(pallet_session::Pallet::<T>::validators().len() as u32)
.max(<T as Config>::WeightInfo::confirm_register_registered(pallet_session::Pallet::<T>::validators().len() as u32))
.max(<T as Config>::WeightInfo::confirm_register_failed_registering(pallet_session::Pallet::<T>::validators().len() as u32));
(weight, Pays::No)
(weight, DispatchClass::Operational, Pays::No)
})]
pub fn confirm_register(
origin: OriginFor<T>,
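`confirm_register` now dispatches as `(weight, DispatchClass::Operational, Pays::No)`: the call is classed operational, and the fee is charged up front but refunded when the call succeeds. A toy model of that refund rule, using a stand-in enum rather than Substrate's actual `Pays` type:

```rust
// Stand-in for Substrate's Pays flag: whether the caller ends up paying fees.
#[derive(Debug, PartialEq)]
enum Pays {
    Yes,
    No,
}

// Successful calls return Pays::No, so the up-front fee is refunded;
// calls that hit an error still pay. (Simplified sketch, not FRAME itself.)
fn fee_outcome(call_succeeded: bool) -> Pays {
    if call_succeeded {
        Pays::No
    } else {
        Pays::Yes
    }
}

fn main() {
    assert_eq!(fee_outcome(true), Pays::No);
    assert_eq!(fee_outcome(false), Pays::Yes);
}
```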
1 change: 1 addition & 0 deletions pallets/registry/src/mock.rs
@@ -365,6 +365,7 @@ pub fn new_test_ext() -> sp_io::TestExternalities {
(7, (4, NULL_ARR, vec![50])),
],
proactive_refresh_data: (vec![], vec![]),
mock_signer_rotate: false,
inital_signers: vec![5, 6],
};

51 changes: 51 additions & 0 deletions pallets/staking/src/benchmarking.rs
@@ -15,6 +15,7 @@

//! Benchmarking setup for pallet-propgation
#![allow(unused_imports)]
use entropy_shared::SIGNING_PARTY_SIZE;
use frame_benchmarking::{account, benchmarks, impl_benchmark_test_suite, whitelisted_caller};
use frame_support::{
assert_ok, ensure,
@@ -175,6 +176,56 @@ benchmarks! {
verify {
assert_last_event::<T>(Event::<T>::ValidatorSyncStatus(validator_id_res, true).into());
}

confirm_key_reshare_confirmed {
let c in 0 .. SIGNING_PARTY_SIZE as u32;
// leave a space for two as not to rotate and only confirm rotation
let confirmation_num = c.checked_sub(2).unwrap_or(0);
let signer_num = SIGNING_PARTY_SIZE - 1;
let caller: T::AccountId = whitelisted_caller();
let validator_id_res = <T as pallet_session::Config>::ValidatorId::try_from(caller.clone()).or(Err(Error::<T>::InvalidValidatorId)).unwrap();
let second_signer: T::AccountId = account("second_signer", 0, SEED);
let second_signer_id = <T as pallet_session::Config>::ValidatorId::try_from(second_signer.clone()).or(Err(Error::<T>::InvalidValidatorId)).unwrap();
ThresholdToStash::<T>::insert(caller.clone(), validator_id_res.clone());

// full signer list leaving room for one extra validator
let mut signers = vec![second_signer_id.clone(); signer_num as usize];
signers.push(validator_id_res.clone());
Signers::<T>::put(signers.clone());

NextSigners::<T>::put(NextSignerInfo {
next_signers: signers,
confirmations: vec![second_signer_id.clone(); confirmation_num as usize],
});

}: confirm_key_reshare(RawOrigin::Signed(caller.clone()))
verify {
assert_last_event::<T>(Event::<T>::SignerConfirmed(validator_id_res).into());
}

confirm_key_reshare_completed {
// one less confirmation so it always flips to rotate
let confirmation_num = SIGNING_PARTY_SIZE - 1;

let caller: T::AccountId = whitelisted_caller();
let validator_id_res = <T as pallet_session::Config>::ValidatorId::try_from(caller.clone()).or(Err(Error::<T>::InvalidValidatorId)).unwrap();
let second_signer: T::AccountId = account("second_signer", 0, SEED);
let second_signer_id = <T as pallet_session::Config>::ValidatorId::try_from(second_signer.clone()).or(Err(Error::<T>::InvalidValidatorId)).unwrap();
ThresholdToStash::<T>::insert(caller.clone(), validator_id_res.clone());
// full signer list leaving room for one extra validator
let mut signers = vec![second_signer_id.clone(); confirmation_num as usize];
signers.push(validator_id_res.clone());

Signers::<T>::put(signers.clone());
NextSigners::<T>::put(NextSignerInfo {
next_signers: signers.clone(),
confirmations: vec![second_signer_id; confirmation_num as usize],
});

}: confirm_key_reshare(RawOrigin::Signed(caller.clone()))
verify {
assert_last_event::<T>(Event::<T>::SignersRotation(signers.clone()).into());
}
}

impl_benchmark_test_suite!(Staking, crate::mock::new_test_ext(), crate::mock::Test);
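The `confirm_key_reshare_confirmed` benchmark derives its confirmation count with `c.checked_sub(2).unwrap_or(0)`, leaving room so the final confirmation path is benchmarked rather than the rotation. The expression clamps at zero and behaves the same as `u32::saturating_sub`:

```rust
// Same clamping as in the benchmark: small c maps to zero confirmations.
fn confirmation_num(c: u32) -> u32 {
    c.checked_sub(2).unwrap_or(0)
}

fn main() {
    assert_eq!(confirmation_num(0), 0);
    assert_eq!(confirmation_num(1), 0);
    assert_eq!(confirmation_num(2), 0);
    assert_eq!(confirmation_num(5), 3);
    // Identical behaviour to the saturating form.
    assert_eq!(confirmation_num(7), 7u32.saturating_sub(2));
}
```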
71 changes: 67 additions & 4 deletions pallets/staking/src/lib.rs
@@ -57,9 +57,9 @@ use sp_staking::SessionIndex;

#[frame_support::pallet]
pub mod pallet {
use entropy_shared::{ValidatorInfo, X25519PublicKey};
use entropy_shared::{ValidatorInfo, X25519PublicKey, SIGNING_PARTY_SIZE};
use frame_support::{
dispatch::DispatchResult,
dispatch::{DispatchResult, DispatchResultWithPostInfo},
pallet_prelude::*,
traits::{Currency, Randomness},
DefaultNoBound,
@@ -71,6 +71,7 @@
};
use sp_runtime::traits::TrailingZeroInput;
use sp_staking::StakingAccount;
use sp_std::vec;
use sp_std::vec::Vec;

use super::*;
@@ -119,6 +120,12 @@ pub mod pallet {
pub new_signer: Vec<u8>,
pub block_number: BlockNumber,
}

#[derive(Clone, Encode, Decode, Eq, PartialEq, RuntimeDebug, TypeInfo)]
pub struct NextSignerInfo<ValidatorId> {
pub next_signers: Vec<ValidatorId>,
pub confirmations: Vec<ValidatorId>,
}
#[pallet::pallet]
#[pallet::without_storage_info]
pub struct Pallet<T>(_);
@@ -173,7 +180,7 @@ pub mod pallet {
/// The next signers ready to take the Signers' place when a reshare is done
#[pallet::storage]
#[pallet::getter(fn next_signers)]
pub type NextSigners<T: Config> = StorageValue<_, Vec<T::ValidatorId>, ValueQuery>;
pub type NextSigners<T: Config> = StorageValue<_, NextSignerInfo<T::ValidatorId>, OptionQuery>;

/// The next time a reshare should happen
#[pallet::storage]
@@ -193,6 +200,8 @@ pub mod pallet {
pub inital_signers: Vec<T::ValidatorId>,
/// validator info and accounts to take part in proactive refresh
pub proactive_refresh_data: (Vec<ValidatorInfo>, Vec<Vec<u8>>),
/// whether to set up a mock signer rotation at genesis (used for testing)
pub mock_signer_rotate: bool,
}

#[pallet::genesis_build]
@@ -222,6 +231,13 @@ pub mod pallet {
proactive_refresh_keys: self.proactive_refresh_data.1.clone(),
};
ProactiveRefresh::<T>::put(refresh_info);

if self.mock_signer_rotate {
NextSigners::<T>::put(NextSignerInfo {
next_signers: self.inital_signers.clone(),
Contributor: Is next_signers initially set to initial_signers just as a placeholder? Presumably it is going to get changed on the next new session.

Member Author: Pretty much; if next signers do not exist then the chain isn't in a rotating state, and when the nodes call confirm done they will get rejected.

confirmations: vec![],
});
}
}
}
// Errors inform users that something went wrong.
@@ -234,6 +250,9 @@ pub mod pallet {
InvalidValidatorId,
SigningGroupError,
TssAccountAlreadyExists,
NotNextSigner,
ReshareNotInProgress,
AlreadyConfirmed,
}

#[pallet::event]
@@ -257,6 +276,10 @@ pub mod pallet {
Vec<Vec<<T as pallet_session::Config>::ValidatorId>>,
Vec<Vec<<T as pallet_session::Config>::ValidatorId>>,
),
/// A validator has confirmed their reshare [validator]
SignerConfirmed(<T as pallet_session::Config>::ValidatorId),
/// The signer group has been rotated [new_signers]
SignersRotation(Vec<<T as pallet_session::Config>::ValidatorId>),
}

#[pallet::call]
@@ -403,6 +426,43 @@ pub mod pallet {
Self::deposit_event(Event::ValidatorSyncStatus(stash, synced));
Ok(())
}

#[pallet::call_index(5)]
#[pallet::weight(({
<T as Config>::WeightInfo::confirm_key_reshare_confirmed(SIGNING_PARTY_SIZE as u32)
.max(<T as Config>::WeightInfo::confirm_key_reshare_completed())
}, DispatchClass::Operational))]
pub fn confirm_key_reshare(origin: OriginFor<T>) -> DispatchResultWithPostInfo {
let ts_server_account = ensure_signed(origin)?;
let validator_stash =
Self::threshold_to_stash(&ts_server_account).ok_or(Error::<T>::NoThresholdKey)?;

let mut signers_info =
NextSigners::<T>::take().ok_or(Error::<T>::ReshareNotInProgress)?;
ensure!(
signers_info.next_signers.contains(&validator_stash),
Error::<T>::NotNextSigner
);

ensure!(
!signers_info.confirmations.contains(&validator_stash),
Error::<T>::AlreadyConfirmed
);

// TODO (#927): Add another check, such as a signature or a verifying key comparison, to
// ensure that rotation was indeed successful.
let current_signer_length = signers_info.next_signers.len();
if signers_info.confirmations.len() == (current_signer_length - 1) {
Signers::<T>::put(signers_info.next_signers.clone());
Self::deposit_event(Event::SignersRotation(signers_info.next_signers));
Ok(Pays::No.into())
} else {
signers_info.confirmations.push(validator_stash.clone());
NextSigners::<T>::put(signers_info);
Self::deposit_event(Event::SignerConfirmed(validator_stash));
Ok(Pays::No.into())
Contributor: This makes it a free transaction, right?

Member Author: Free may not be the right term; refunded, as in they pay up front, then if they don't hit any errors they get their fees back.

}
}
}
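The `confirm_key_reshare` dispatchable records confirmations until all but one of the next signers have confirmed; the final confirmation flips the signer set instead of being recorded. A standalone sketch of that counting logic, using plain integers in place of `ValidatorId` (a simplification, not the pallet code):

```rust
#[derive(Debug, PartialEq)]
enum Outcome {
    Confirmed, // recorded, still waiting on others
    Rotated,   // this was the last outstanding signer
}

#[derive(Debug, PartialEq)]
enum ConfirmError {
    NotNextSigner,
    AlreadyConfirmed,
}

struct NextSignerInfo {
    next_signers: Vec<u32>,
    confirmations: Vec<u32>,
}

impl NextSignerInfo {
    fn confirm(&mut self, signer: u32) -> Result<Outcome, ConfirmError> {
        if !self.next_signers.contains(&signer) {
            return Err(ConfirmError::NotNextSigner);
        }
        if self.confirmations.contains(&signer) {
            return Err(ConfirmError::AlreadyConfirmed);
        }
        // n - 1 prior confirmations means this signer completes the set.
        if self.confirmations.len() == self.next_signers.len() - 1 {
            Ok(Outcome::Rotated)
        } else {
            self.confirmations.push(signer);
            Ok(Outcome::Confirmed)
        }
    }
}

fn main() {
    let mut info = NextSignerInfo { next_signers: vec![1, 2, 3], confirmations: vec![] };
    assert_eq!(info.confirm(4), Err(ConfirmError::NotNextSigner));
    assert_eq!(info.confirm(1), Ok(Outcome::Confirmed));
    assert_eq!(info.confirm(1), Err(ConfirmError::AlreadyConfirmed));
    assert_eq!(info.confirm(2), Ok(Outcome::Confirmed));
    // Third confirmation (2 already recorded == 3 - 1) completes the rotation.
    assert_eq!(info.confirm(3), Ok(Outcome::Rotated));
}
```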

impl<T: Config> Pallet<T> {
@@ -446,7 +506,10 @@ pub mod pallet {
// removes first signer and pushes new signer to back
current_signers.remove(0);
current_signers.push(next_signer_up.clone());
NextSigners::<T>::put(current_signers);
NextSigners::<T>::put(NextSignerInfo {
next_signers: current_signers,
confirmations: vec![],
});
Contributor: Not directly related to this PR, but is it possible to test this with our 3 node integration test / docker-compose setup? I guess we need 4 nodes before the check above (that the validator set size is greater than the number of current signers) will pass.

Member Author: Yes, we would need an extra node added.

// trigger reshare at next block
let current_block_number = <frame_system::Pallet<T>>::block_number();
let reshare_info = ReshareInfo {
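The session handler's rotation step (drop the oldest signer from the front, append the incoming one at the back) can be shown in isolation; plain integers stand in for validator IDs:

```rust
// Mirrors the new_session queueing above: remove(0) drops the oldest
// signer, push adds the next signer up at the back.
fn rotate_signers(mut current_signers: Vec<u32>, next_signer_up: u32) -> Vec<u32> {
    current_signers.remove(0);
    current_signers.push(next_signer_up);
    current_signers
}

fn main() {
    assert_eq!(rotate_signers(vec![1, 2, 3], 4), vec![2, 3, 4]);
}
```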