From ce35b656b84d69d2b56f07e3b1e84d0c3279466f Mon Sep 17 00:00:00 2001
From: Stan Bondi
Date: Wed, 19 Oct 2022 14:05:55 +0200
Subject: [PATCH] chore: merge development into feature-dan (#4815)

* fix: batch rewind operations (#4752)

Description
---
Split the rewind DbTx into smaller pieces.

How Has This Been Tested?
---
I did a rewind on 20000+ (empty) blocks.

* fix: fix config.toml bug (#4780)

Description
---
The base node errored when reading the `block_sync_trigger = 5` setting:

```
ExitError { exit_code: ConfigError, details: Some("Invalid value for `base_node`: unknown field `block_sync_trigger`, expected one of `override_from`, `unconfirmed_pool`, `reorg_pool`, `service`") }
```

Motivation and Context
---
Reading default config settings should not cause an error.

How Has This Been Tested?
---
System-level testing

* fix(p2p/liveness): remove fallible unwrap (#4784)

Description
---
Removed a stray unwrap in the liveness service.

Motivation and Context
---
Caused a base node to panic under stress test conditions.

```
thread 'tokio-runtime-worker' panicked at 'called `Result::unwrap()` on an `Err` value: DhtOutboundError(RequesterReplyChannelClosed)', base_layer\p2p\src\services\liveness\service.rs:164:71
```

How Has This Been Tested?
---
Tests pass

* fix(tari-script): use tari script encoding for execution stack serde de/serialization (#4791)

Description
---
- Uses tari script encoding (equivalent to consensus encoding) for the `ExecutionStack` serde impl
- Renames as_bytes to to_bytes as per Rust convention
- Adds a migration to fix the execution stack encoding in the db

Motivation and Context
---
Resolves #4790

How Has This Been Tested?
---
Added a test to alert if breaking changes occur with serde serialization of the execution stack. Manual testing in progress.

* feat: optimize transaction service queries (#4775)

Description
---
Transaction service sql db queries must handle `DieselError(DatabaseError(__Unknown, "database is locked"))`. This PR attempts to remove situations where that error may occur under highly busy async circumstances, specifically:
- Combine find and update/write type queries into one.
- Add sql transactions around complex tasks.

_**Note:** Partial resolution for #4731._

Motivation and Context
---
See above.

How Has This Been Tested?
---
- Passed unit tests.
- Passed cucumber tests.
- ~~**TODO:**~~ System-level tests under stress conditions.

* feat: move nonce to first in sha hash (#4778)

Description
---
This moves the nonce to the front of the hashing order when hashing for the sha3 difficulty. This is done so that miners cannot cache most of the header and only load in the nonce; it forces the miner to hash the complete header each time the nonce changes.
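A minimal sketch of why the order matters, using the `sha3` crate as the codebase does (the byte values are hypothetical stand-ins for the real header fields):

```rust
use sha3::{Digest, Sha3_256};

fn main() {
    // Hypothetical stand-ins for the real header fields.
    let mining_hash = [1u8; 32];
    let pow_bytes = [0u8; 1];

    // Old order: header body first, nonce last. A miner can absorb the header
    // body once and clone that midstate for every candidate nonce; this is
    // what the miner's now-removed pre-nonce hash cache took advantage of.
    let midstate = Sha3_256::new().chain(&mining_hash);
    for nonce in 0u64..4 {
        let _old = midstate
            .clone()
            .chain(nonce.to_le_bytes())
            .chain(&pow_bytes)
            .finalize();
    }

    // New order: nonce first. The midstate changes with every nonce, so the
    // complete header must be re-hashed on each attempt.
    for nonce in 0u64..4 {
        let _new = Sha3_256::new()
            .chain(nonce.to_le_bytes())
            .chain(&mining_hash)
            .chain(&pow_bytes)
            .finalize();
    }
}
```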
Motivation and Context
---
Fixes: #4767

How Has This Been Tested?
---
Unit tests all pass.

* fix(dht): remove some invalid saf failure cases (#4787)

Description
---
- Ignores nanos for the `stored_at` field in StoredMessages
- Uses direct u32 <-> i32 conversion
- Improves the error message when attempting to store an expired message
- Discards expired messages immediately
- Logs at debug level when a remote client closes the connection in the RPC server

Motivation and Context
---
- Nano conversion will fail when >= 2_000_000_000; nanos are not important to preserve, so we ignore them (set to zero)
- u32 to/from i32 conversion does not lose any data as both are 32-bit; the value is only used as i32 in the database
- 'The message was not valid for store and forward' occurs if the message has expired; this PR uses a more descriptive error message for this specific case
- Expired messages should be discarded immediately
- Early close "errors" on the RPC server simply indicate that the client went away, which is expected and not something the server controls, and so are logged at debug level

How Has This Been Tested?
---
Manually

* v0.38.6

* fix(core): only resize db if migration is required (#4792)

Description
---
Adds a conditional to only increase the database size if a migration is required.

Motivation and Context
---
A new database (cucumber, functional tests) has no inputs, so migration is not required. Ref #4791

How Has This Been Tested?
---

* fix(miner): clippy error (#4793)

Description
---
Removes an unused function in the miner.

Motivation and Context
---
Clippy

How Has This Been Tested?
---
No clippy error

* test: remove cucumber tests, simplify others (#4794)

Description
---
* remove auto update tests from cucumber
* rename some tests to be prefixed with `test_`
* simplify two cucumber tests by removing steps

Motivation and Context
---
The auto update tests have an external dependency, which makes them hard to run reliably. They were marked as broken, so I removed them instead. There were two steps in the `list_height` and `list_headers` tests that created base nodes. Upon inspection of the logs, these base nodes never synced to the height of 5 and were not checked in the test, so they were useless and just slowed the test down.

How Has This Been Tested?
---
npm test

* v0.38.7

* feat: add deepsource config

* fix(core): periodically commit large transaction in prune_to_height (#4805)

* fix(comms/rpc): measures client-side latency to first message received (#4817)

* fix(core): increase sync timeouts (#4800)

Co-authored-by: Cayle Sharrock

* feat: add multisig script that returns aggregate of signed public keys (#4742)

Description
---
Added an `m-of-n` multisig TariScript that returns the aggregate public key of the signatories if successful and fails otherwise. This is useful if the aggregate public key of the signatories is also the script public key, where signatories would work together to create an aggregate script signature using their individual script private keys.

Motivation and Context
---
To enhance the practicality of the `m-of-n` multisig TariScript.

How Has This Been Tested?
---
Unit tests

Co-Authored-By: SW van Heerden swvheerden@gmail.com

* feat(comms): adds periodic socket-level liveness checks (#4819)

Description
---
- adds socket-level liveness checks
- adds configuration to enable liveness checks (currently enabled by default in the base node, disabled in the wallet)
- updates the status line to display liveness status

Motivation and Context
---
Allows us to gain visibility on the base latency of the transport without including the overhead of the noise socket and yamux.

How Has This Been Tested?
---
Manually

* fix(core): dont request full non-tip block if block is empty (#4802)

Description
---
- Checks for an edge case, preventing an unnecessary full candidate block request when the block is empty.

Motivation and Context
---
A full block request for an empty block is not necessary, as we already have all the information required to construct the candidate block. This check was missing from the branch where the candidate block is not the next tip block.

How Has This Been Tested?
--- Co-authored-by: Martin Stefcek <35243812+Cifko@users.noreply.github.com> Co-authored-by: Hansie Odendaal <39146854+hansieodendaal@users.noreply.github.com> Co-authored-by: SW van Heerden Co-authored-by: stringhandler Co-authored-by: CjS77 --- .circleci/config.yml | 3 + .deepsource.toml | 10 + Cargo.lock | 46 +- applications/tari_app_grpc/Cargo.toml | 2 +- .../src/conversions/transaction_input.rs | 4 +- .../src/conversions/transaction_output.rs | 2 +- .../src/conversions/unblinded_output.rs | 4 +- applications/tari_app_utilities/Cargo.toml | 2 +- applications/tari_base_node/Cargo.toml | 2 +- applications/tari_base_node/src/bootstrap.rs | 8 +- .../src/commands/command/add_peer.rs | 5 +- .../src/commands/command/ban_peer.rs | 5 +- .../src/commands/command/dial_peer.rs | 2 +- .../src/commands/command/get_peer.rs | 6 +- .../src/commands/command/list_connections.rs | 6 +- .../src/commands/command/list_peers.rs | 2 +- .../src/commands/command/mod.rs | 12 +- .../commands/command/reset_offline_peers.rs | 3 +- .../src/commands/command/status.rs | 17 +- .../src/commands/command/unban_all_peers.rs | 5 +- .../src/commands/status_line.rs | 6 +- .../src/grpc/base_node_grpc_server.rs | 4 +- applications/tari_console_wallet/Cargo.toml | 2 +- .../tari_merge_mining_proxy/Cargo.toml | 2 +- applications/tari_miner/Cargo.toml | 2 +- applications/tari_miner/src/difficulty.rs | 34 +- base_layer/common_types/Cargo.toml | 2 +- base_layer/core/Cargo.toml | 2 +- .../comms_interface/inbound_handlers.rs | 7 + base_layer/core/src/base_node/sync/config.rs | 4 +- .../src/chain_storage/blockchain_database.rs | 73 +- .../core/src/chain_storage/lmdb_db/lmdb.rs | 48 + .../core/src/chain_storage/lmdb_db/lmdb_db.rs | 131 ++- .../core/src/consensus/consensus_constants.rs | 77 +- .../consensus/consensus_encoding/script.rs | 4 +- base_layer/core/src/proof_of_work/sha3_pow.rs | 19 +- base_layer/core/src/proto/transaction.rs | 8 +- .../transaction_input.rs | 8 +- .../proto/transaction_sender.rs | 2 +- base_layer/core/tests/block_validation.rs | 1 + .../chain_storage_tests/chain_backend.rs | 4 +- .../chain_storage_tests/chain_storage.rs | 80 +- base_layer/key_manager/Cargo.toml | 2 +- base_layer/mmr/Cargo.toml | 2 +- base_layer/p2p/Cargo.toml | 2 +- base_layer/p2p/src/config.rs | 10 +- base_layer/p2p/src/initialization.rs | 3 +- .../p2p/src/services/liveness/service.rs | 2 +- base_layer/service_framework/Cargo.toml | 2 +- base_layer/tari_mining_helper_ffi/Cargo.toml | 2 +- base_layer/wallet/Cargo.toml | 2 +- base_layer/wallet/src/config.rs | 1 + .../storage/sqlite_db/mod.rs | 217 +++-- .../storage/sqlite_db/new_output_sql.rs | 4 +- .../transaction_service/storage/database.rs | 4 +- .../transaction_service/storage/sqlite_db.rs | 891 +++++++++++------- base_layer/wallet/tests/contacts_service.rs | 1 + base_layer/wallet/tests/wallet.rs | 2 + base_layer/wallet_ffi/Cargo.toml | 2 +- base_layer/wallet_ffi/src/lib.rs | 1 + changelog.md | 48 + common/Cargo.toml | 2 +- common/config/presets/c_base_node.toml | 4 +- common/config/presets/d_console_wallet.toml | 2 + common_sqlite/Cargo.toml | 2 +- comms/core/Cargo.toml | 2 +- comms/core/src/builder/comms_node.rs | 13 +- comms/core/src/builder/mod.rs | 6 + comms/core/src/connection_manager/dialer.rs | 49 +- comms/core/src/connection_manager/listener.rs | 41 +- comms/core/src/connection_manager/liveness.rs | 133 ++- comms/core/src/connection_manager/manager.rs | 46 +- comms/core/src/connection_manager/mod.rs | 2 + .../tests/listener_dialer.rs | 6 +- .../core/src/connection_manager/wire_mode.rs 
| 12 +- comms/core/src/protocol/identity.rs | 8 +- comms/core/src/protocol/rpc/client/mod.rs | 25 +- comms/core/src/protocol/rpc/server/error.rs | 13 +- comms/core/src/protocol/rpc/server/mod.rs | 16 +- comms/core/src/test_utils/transport.rs | 4 +- comms/core/src/tor/control_client/client.rs | 4 +- comms/core/src/transports/dns/mod.rs | 1 + comms/core/src/transports/dns/tor.rs | 4 +- comms/core/src/transports/memory.rs | 16 +- comms/core/src/transports/mod.rs | 4 +- comms/core/src/transports/socks.rs | 18 +- comms/core/src/transports/tcp.rs | 8 +- comms/core/src/transports/tcp_with_tor.rs | 6 +- comms/dht/Cargo.toml | 2 +- comms/dht/src/dht.rs | 16 + comms/dht/src/envelope.rs | 2 +- .../store_forward/database/stored_message.rs | 10 +- comms/dht/src/store_forward/error.rs | 10 +- comms/dht/src/store_forward/message.rs | 7 +- .../dht/src/store_forward/saf_handler/task.rs | 17 +- comms/dht/src/store_forward/store.rs | 8 +- comms/rpc_macros/Cargo.toml | 2 +- infrastructure/derive/Cargo.toml | 2 +- infrastructure/shutdown/Cargo.toml | 2 +- infrastructure/storage/Cargo.toml | 2 +- infrastructure/storage/tests/lmdb.rs | 16 +- infrastructure/tari_script/src/lib.rs | 2 +- infrastructure/tari_script/src/op_codes.rs | 36 +- infrastructure/tari_script/src/script.rs | 80 +- infrastructure/tari_script/src/serde.rs | 107 ++- infrastructure/tari_script/src/stack.rs | 60 +- infrastructure/test_utils/Cargo.toml | 2 +- integration_tests/config/config.toml | 380 -------- integration_tests/cucumber.js | 5 +- .../features/BaseNodeAutoUpdate.feature | 15 - .../features/BaseNodeConnectivity.feature | 6 +- .../features/WalletAutoUpdate.feature | 15 - integration_tests/helpers/config.js | 7 +- integration_tests/package-lock.json | 74 ++ package-lock.json | 2 +- 115 files changed, 1881 insertions(+), 1317 deletions(-) create mode 100644 .deepsource.toml delete mode 100644 integration_tests/config/config.toml delete mode 100644 integration_tests/features/BaseNodeAutoUpdate.feature delete mode 100644 integration_tests/features/WalletAutoUpdate.feature diff --git a/.circleci/config.yml b/.circleci/config.yml index 41bbab02f5..3161fe743a 100644 --- a/.circleci/config.yml +++ b/.circleci/config.yml @@ -31,6 +31,9 @@ commands: - run: name: Build miner command: cargo build --release --bin tari_miner + - run: + name: Build wallet FFI + command: cargo build --release --package tari_wallet_ffi - run: name: Run cucumber scenarios no_output_timeout: 20m diff --git a/.deepsource.toml b/.deepsource.toml new file mode 100644 index 0000000000..7219beb3ba --- /dev/null +++ b/.deepsource.toml @@ -0,0 +1,10 @@ +version = 1 + + +[[analyzers]] +name = "rust" +enabled = true + + [analyzers.meta] + msrv = "stable" + diff --git a/Cargo.lock b/Cargo.lock index 85179a2a51..af22725f8c 100644 --- a/Cargo.lock +++ b/Cargo.lock @@ -4594,7 +4594,7 @@ dependencies = [ [[package]] name = "tari_app_grpc" -version = "0.38.5" +version = "0.38.7" dependencies = [ "argon2 0.4.1", "base64 0.13.0", @@ -4619,7 +4619,7 @@ dependencies = [ [[package]] name = "tari_app_utilities" -version = "0.38.5" +version = "0.38.7" dependencies = [ "clap 3.2.22", "config", @@ -4641,7 +4641,7 @@ dependencies = [ [[package]] name = "tari_base_node" -version = "0.38.5" +version = "0.38.7" dependencies = [ "anyhow", "async-trait", @@ -4742,7 +4742,7 @@ dependencies = [ [[package]] name = "tari_common" -version = "0.38.5" +version = "0.38.7" dependencies = [ "anyhow", "blake2 0.9.2", @@ -4770,7 +4770,7 @@ dependencies = [ [[package]] name = "tari_common_sqlite" -version = 
"0.38.5" +version = "0.38.7" dependencies = [ "diesel", "log", @@ -4779,7 +4779,7 @@ dependencies = [ [[package]] name = "tari_common_types" -version = "0.38.5" +version = "0.38.7" dependencies = [ "base64 0.13.0", "digest 0.9.0", @@ -4795,7 +4795,7 @@ dependencies = [ [[package]] name = "tari_comms" -version = "0.38.5" +version = "0.38.7" dependencies = [ "anyhow", "async-trait", @@ -4845,7 +4845,7 @@ dependencies = [ [[package]] name = "tari_comms_dht" -version = "0.38.5" +version = "0.38.7" dependencies = [ "anyhow", "bitflags 1.3.2", @@ -4891,7 +4891,7 @@ dependencies = [ [[package]] name = "tari_comms_rpc_macros" -version = "0.38.5" +version = "0.38.7" dependencies = [ "futures 0.3.24", "proc-macro2", @@ -4906,7 +4906,7 @@ dependencies = [ [[package]] name = "tari_console_wallet" -version = "0.38.5" +version = "0.38.7" dependencies = [ "base64 0.13.0", "bitflags 1.3.2", @@ -4956,7 +4956,7 @@ dependencies = [ [[package]] name = "tari_core" -version = "0.38.5" +version = "0.38.7" dependencies = [ "async-trait", "bincode", @@ -5044,7 +5044,7 @@ dependencies = [ [[package]] name = "tari_key_manager" -version = "0.38.5" +version = "0.38.7" dependencies = [ "argon2 0.2.4", "arrayvec 0.7.2", @@ -5091,7 +5091,7 @@ dependencies = [ [[package]] name = "tari_merge_mining_proxy" -version = "0.38.5" +version = "0.38.7" dependencies = [ "anyhow", "bincode", @@ -5143,7 +5143,7 @@ dependencies = [ [[package]] name = "tari_miner" -version = "0.38.5" +version = "0.38.7" dependencies = [ "base64 0.13.0", "bufstream", @@ -5179,7 +5179,7 @@ dependencies = [ [[package]] name = "tari_mining_helper_ffi" -version = "0.38.5" +version = "0.38.7" dependencies = [ "hex", "libc", @@ -5196,7 +5196,7 @@ dependencies = [ [[package]] name = "tari_mmr" -version = "0.38.5" +version = "0.38.7" dependencies = [ "bincode", "blake2 0.9.2", @@ -5215,7 +5215,7 @@ dependencies = [ [[package]] name = "tari_p2p" -version = "0.38.5" +version = "0.38.7" dependencies = [ "anyhow", "bytes 0.5.6", @@ -5272,7 +5272,7 @@ dependencies = [ [[package]] name = "tari_service_framework" -version = "0.38.5" +version = "0.38.7" dependencies = [ "anyhow", "async-trait", @@ -5289,7 +5289,7 @@ dependencies = [ [[package]] name = "tari_shutdown" -version = "0.38.5" +version = "0.38.7" dependencies = [ "futures 0.3.24", "tokio", @@ -5297,7 +5297,7 @@ dependencies = [ [[package]] name = "tari_storage" -version = "0.38.5" +version = "0.38.7" dependencies = [ "bincode", "lmdb-zero", @@ -5311,7 +5311,7 @@ dependencies = [ [[package]] name = "tari_test_utils" -version = "0.38.5" +version = "0.38.7" dependencies = [ "futures 0.3.24", "futures-test", @@ -5338,7 +5338,7 @@ dependencies = [ [[package]] name = "tari_wallet" -version = "0.38.5" +version = "0.38.7" dependencies = [ "argon2 0.2.4", "async-trait", @@ -5389,7 +5389,7 @@ dependencies = [ [[package]] name = "tari_wallet_ffi" -version = "0.38.5" +version = "0.38.7" dependencies = [ "cbindgen 0.24.3", "chrono", diff --git a/applications/tari_app_grpc/Cargo.toml b/applications/tari_app_grpc/Cargo.toml index 002eefa59c..25fbb05cbd 100644 --- a/applications/tari_app_grpc/Cargo.toml +++ b/applications/tari_app_grpc/Cargo.toml @@ -4,7 +4,7 @@ authors = ["The Tari Development Community"] description = "This crate is to provide a single source for all cross application grpc files and conversions to and from tari::core" repository = "https://github.com/tari-project/tari" license = "BSD-3-Clause" -version = "0.38.5" +version = "0.38.7" edition = "2018" [dependencies] diff --git 
a/applications/tari_app_grpc/src/conversions/transaction_input.rs b/applications/tari_app_grpc/src/conversions/transaction_input.rs index 0a42d1d61d..a4e346728f 100644 --- a/applications/tari_app_grpc/src/conversions/transaction_input.rs +++ b/applications/tari_app_grpc/src/conversions/transaction_input.rs @@ -119,8 +119,8 @@ impl TryFrom for grpc::TransactionInput { script: input .script() .map_err(|_| "Non-compact Transaction input should contain script".to_string())? - .as_bytes(), - input_data: input.input_data.as_bytes(), + .to_bytes(), + input_data: input.input_data.to_bytes(), script_signature, sender_offset_public_key: input .sender_offset_public_key() diff --git a/applications/tari_app_grpc/src/conversions/transaction_output.rs b/applications/tari_app_grpc/src/conversions/transaction_output.rs index af9afd989c..8a037d8ef6 100644 --- a/applications/tari_app_grpc/src/conversions/transaction_output.rs +++ b/applications/tari_app_grpc/src/conversions/transaction_output.rs @@ -85,7 +85,7 @@ impl From for grpc::TransactionOutput { features: Some(output.features.into()), commitment: Vec::from(output.commitment.as_bytes()), range_proof: Vec::from(output.proof.as_bytes()), - script: output.script.as_bytes(), + script: output.script.to_bytes(), sender_offset_public_key: output.sender_offset_public_key.as_bytes().to_vec(), metadata_signature: Some(grpc::ComSignature { public_nonce_commitment: Vec::from(output.metadata_signature.public_nonce().as_bytes()), diff --git a/applications/tari_app_grpc/src/conversions/unblinded_output.rs b/applications/tari_app_grpc/src/conversions/unblinded_output.rs index d49153c35b..18c8dab78c 100644 --- a/applications/tari_app_grpc/src/conversions/unblinded_output.rs +++ b/applications/tari_app_grpc/src/conversions/unblinded_output.rs @@ -41,8 +41,8 @@ impl From for grpc::UnblindedOutput { value: u64::from(output.value), spending_key: output.spending_key.as_bytes().to_vec(), features: Some(output.features.into()), - script: output.script.as_bytes(), - input_data: output.input_data.as_bytes(), + script: output.script.to_bytes(), + input_data: output.input_data.to_bytes(), script_private_key: output.script_private_key.as_bytes().to_vec(), sender_offset_public_key: output.sender_offset_public_key.as_bytes().to_vec(), metadata_signature: Some(grpc::ComSignature { diff --git a/applications/tari_app_utilities/Cargo.toml b/applications/tari_app_utilities/Cargo.toml index 3ada64c646..4eca3252b4 100644 --- a/applications/tari_app_utilities/Cargo.toml +++ b/applications/tari_app_utilities/Cargo.toml @@ -1,6 +1,6 @@ [package] name = "tari_app_utilities" -version = "0.38.5" +version = "0.38.7" authors = ["The Tari Development Community"] edition = "2018" license = "BSD-3-Clause" diff --git a/applications/tari_base_node/Cargo.toml b/applications/tari_base_node/Cargo.toml index 2ff991ef58..cd717a4cc0 100644 --- a/applications/tari_base_node/Cargo.toml +++ b/applications/tari_base_node/Cargo.toml @@ -4,7 +4,7 @@ authors = ["The Tari Development Community"] description = "The tari full base node implementation" repository = "https://github.com/tari-project/tari" license = "BSD-3-Clause" -version = "0.38.5" +version = "0.38.7" edition = "2018" [dependencies] diff --git a/applications/tari_base_node/src/bootstrap.rs b/applications/tari_base_node/src/bootstrap.rs index 97d1c24643..d2975aaa22 100644 --- a/applications/tari_base_node/src/bootstrap.rs +++ b/applications/tari_base_node/src/bootstrap.rs @@ -20,7 +20,7 @@ // WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING 
NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE // USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. -use std::{cmp, str::FromStr, sync::Arc}; +use std::{cmp, str::FromStr, sync::Arc, time::Duration}; use log::*; use tari_app_utilities::{consts, identity_management, identity_management::load_from_json}; @@ -106,6 +106,12 @@ where B: BlockchainBackend + 'static .map_err(|e| ExitError::new(ExitCode::ConfigError, e))?; p2p_config.transport.tor.identity = tor_identity; + // TODO: This should probably be disabled in future and have it optionally set/unset in the config - this check + // does allow MITM/ISP/tor router to connect this node's IP to a destination IP/onion address. + // Specifically, "pingpong" text is periodically sent on an unencrypted socket allowing anyone observing + // the traffic to recognise the sending IP address as almost certainly a tari node. + p2p_config.listener_liveness_check_interval = Some(Duration::from_secs(15)); + let mut handles = StackBuilder::new(self.interrupt_signal) .add_initializer(P2pInitializer::new( p2p_config.clone(), diff --git a/applications/tari_base_node/src/commands/command/add_peer.rs b/applications/tari_base_node/src/commands/command/add_peer.rs index 50cf190716..f5c2a74bd7 100644 --- a/applications/tari_base_node/src/commands/command/add_peer.rs +++ b/applications/tari_base_node/src/commands/command/add_peer.rs @@ -44,7 +44,8 @@ pub struct ArgsAddPeer { impl HandleCommand for CommandContext { async fn handle_command(&mut self, args: ArgsAddPeer) -> Result<(), Error> { let public_key = args.public_key.into(); - if self.peer_manager.exists(&public_key).await { + let peer_manager = self.comms.peer_manager(); + if peer_manager.exists(&public_key).await { return Err(anyhow!("Peer with public key '{}' already exists", public_key)); } let node_id = NodeId::from_public_key(&public_key); @@ -57,7 +58,7 @@ impl HandleCommand for CommandContext { vec![], String::new(), ); - self.peer_manager.add_peer(peer).await?; + peer_manager.add_peer(peer).await?; println!("Peer with node id '{}'was added to the base node.", node_id); Ok(()) } diff --git a/applications/tari_base_node/src/commands/command/ban_peer.rs b/applications/tari_base_node/src/commands/command/ban_peer.rs index 7de10c7e33..0b13742740 100644 --- a/applications/tari_base_node/src/commands/command/ban_peer.rs +++ b/applications/tari_base_node/src/commands/command/ban_peer.rs @@ -80,13 +80,14 @@ impl CommandContext { if self.base_node_identity.node_id() == &node_id { Err(ArgsError::BanSelf.into()) } else if must_ban { - self.connectivity + self.comms + .connectivity() .ban_peer_until(node_id.clone(), duration, "UI manual ban".to_string()) .await?; println!("Peer was banned in base node."); Ok(()) } else { - self.peer_manager.unban_peer(&node_id).await?; + self.comms.peer_manager().unban_peer(&node_id).await?; println!("Peer ban was removed from base node."); Ok(()) } diff --git a/applications/tari_base_node/src/commands/command/dial_peer.rs b/applications/tari_base_node/src/commands/command/dial_peer.rs index d7dcbd8815..b808c936e3 100644 --- a/applications/tari_base_node/src/commands/command/dial_peer.rs +++ b/applications/tari_base_node/src/commands/command/dial_peer.rs @@ -48,7 +48,7 @@ impl HandleCommand for CommandContext { impl CommandContext { /// Function to process the dial-peer command pub async fn dial_peer(&self, dest_node_id: NodeId) -> Result<(), Error> { - let connectivity = self.connectivity.clone(); + let connectivity = self.comms.connectivity(); 
task::spawn(async move { let start = Instant::now(); println!("☎️ Dialing peer..."); diff --git a/applications/tari_base_node/src/commands/command/get_peer.rs b/applications/tari_base_node/src/commands/command/get_peer.rs index 91c78d114f..545bfc2748 100644 --- a/applications/tari_base_node/src/commands/command/get_peer.rs +++ b/applications/tari_base_node/src/commands/command/get_peer.rs @@ -63,7 +63,8 @@ enum ArgsError { impl CommandContext { pub async fn get_peer(&self, partial: Vec, original_str: String) -> Result<(), Error> { - let peers = self.peer_manager.find_all_starts_with(&partial).await?; + let peer_manager = self.comms.peer_manager(); + let peers = peer_manager.find_all_starts_with(&partial).await?; let peer = { if let Some(peer) = peers.into_iter().next() { peer @@ -71,8 +72,7 @@ impl CommandContext { let pk = parse_emoji_id_or_public_key(&original_str).ok_or_else(|| ArgsError::NoPeerMatching { original_str: original_str.clone(), })?; - let peer = self - .peer_manager + let peer = peer_manager .find_by_public_key(&pk) .await? .ok_or(ArgsError::NoPeerMatching { original_str })?; diff --git a/applications/tari_base_node/src/commands/command/list_connections.rs b/applications/tari_base_node/src/commands/command/list_connections.rs index dcef31f483..8771123457 100644 --- a/applications/tari_base_node/src/commands/command/list_connections.rs +++ b/applications/tari_base_node/src/commands/command/list_connections.rs @@ -53,9 +53,9 @@ impl CommandContext { "User Agent", "Info", ]); + let peer_manager = self.comms.peer_manager(); for conn in conns { - let peer = self - .peer_manager + let peer = peer_manager .find_by_node_id(conn.peer_node_id()) .await .expect("Unexpected peer database error") @@ -105,7 +105,7 @@ impl CommandContext { impl CommandContext { /// Function to process the list-connections command pub async fn list_connections(&mut self) -> Result<(), Error> { - let conns = self.connectivity.get_active_connections().await?; + let conns = self.comms.connectivity().get_active_connections().await?; let (mut nodes, mut clients) = conns .into_iter() .partition::, _>(|a| a.peer_features().is_node()); diff --git a/applications/tari_base_node/src/commands/command/list_peers.rs b/applications/tari_base_node/src/commands/command/list_peers.rs index 7587b28e3e..bb7ea82cf3 100644 --- a/applications/tari_base_node/src/commands/command/list_peers.rs +++ b/applications/tari_base_node/src/commands/command/list_peers.rs @@ -54,7 +54,7 @@ impl CommandContext { _ => false, }) } - let peers = self.peer_manager.perform_query(query).await?; + let peers = self.comms.peer_manager().perform_query(query).await?; let num_peers = peers.len(); println!(); let mut table = Table::new(); diff --git a/applications/tari_base_node/src/commands/command/mod.rs b/applications/tari_base_node/src/commands/command/mod.rs index b928ff7b0f..e2ca78b1b6 100644 --- a/applications/tari_base_node/src/commands/command/mod.rs +++ b/applications/tari_base_node/src/commands/command/mod.rs @@ -65,9 +65,9 @@ use async_trait::async_trait; use clap::{CommandFactory, FromArgMatches, Parser, Subcommand}; use strum::{EnumVariantNames, VariantNames}; use tari_comms::{ - connectivity::ConnectivityRequester, - peer_manager::{Peer, PeerManager, PeerManagerError, PeerQuery}, + peer_manager::{Peer, PeerManagerError, PeerQuery}, protocol::rpc::RpcServerHandle, + CommsNode, NodeIdentity, }; use tari_comms_dht::{DhtDiscoveryRequester, MetricsCollectorHandle}; @@ -155,8 +155,7 @@ pub struct CommandContext { dht_metrics_collector: 
MetricsCollectorHandle, rpc_server: RpcServerHandle, base_node_identity: Arc, - peer_manager: Arc, - connectivity: ConnectivityRequester, + comms: CommsNode, liveness: LivenessHandle, node_service: LocalNodeCommsInterface, mempool_service: LocalMempoolService, @@ -176,8 +175,7 @@ impl CommandContext { dht_metrics_collector: ctx.base_node_dht().metrics_collector(), rpc_server: ctx.rpc_server(), base_node_identity: ctx.base_node_identity(), - peer_manager: ctx.base_node_comms().peer_manager(), - connectivity: ctx.base_node_comms().connectivity(), + comms: ctx.base_node_comms().clone(), liveness: ctx.liveness(), node_service: ctx.local_node(), mempool_service: ctx.local_mempool(), @@ -297,7 +295,7 @@ impl HandleCommand for CommandContext { impl CommandContext { async fn fetch_banned_peers(&self) -> Result, PeerManagerError> { - let pm = &self.peer_manager; + let pm = self.comms.peer_manager(); let query = PeerQuery::new().select_where(|p| p.is_banned()); pm.perform_query(query).await } diff --git a/applications/tari_base_node/src/commands/command/reset_offline_peers.rs b/applications/tari_base_node/src/commands/command/reset_offline_peers.rs index 2f780e97a1..b949aba1a4 100644 --- a/applications/tari_base_node/src/commands/command/reset_offline_peers.rs +++ b/applications/tari_base_node/src/commands/command/reset_offline_peers.rs @@ -40,7 +40,8 @@ impl HandleCommand for CommandContext { impl CommandContext { pub async fn reset_offline_peers(&self) -> Result<(), Error> { let num_updated = self - .peer_manager + .comms + .peer_manager() .update_each(|mut peer| { if peer.is_offline() { peer.set_offline(false); diff --git a/applications/tari_base_node/src/commands/command/status.rs b/applications/tari_base_node/src/commands/command/status.rs index f499b55059..f46a507f1e 100644 --- a/applications/tari_base_node/src/commands/command/status.rs +++ b/applications/tari_base_node/src/commands/command/status.rs @@ -27,6 +27,7 @@ use async_trait::async_trait; use chrono::{DateTime, NaiveDateTime, Utc}; use clap::Parser; use tari_app_utilities::consts; +use tari_comms::connection_manager::LivenessStatus; use tokio::time; use super::{CommandContext, HandleCommand}; @@ -47,6 +48,7 @@ impl HandleCommand for CommandContext { } impl CommandContext { + #[allow(clippy::too_many_lines)] pub async fn status(&mut self, output: StatusLineOutput) -> Result<(), Error> { let mut full_log = false; if self.last_time_full.elapsed() > Duration::from_secs(120) { @@ -102,7 +104,7 @@ impl CommandContext { status_line.add_field("Mempool", "query timed out"); }; - let conns = self.connectivity.get_active_connections().await?; + let conns = self.comms.connectivity().get_active_connections().await?; let (num_nodes, num_clients) = conns.iter().fold((0usize, 0usize), |(nodes, clients), conn| { if conn.peer_features().is_node() { (nodes + 1, clients) @@ -139,6 +141,19 @@ impl CommandContext { ); } + match self.comms.listening_info().liveness_status() { + LivenessStatus::Disabled => {}, + LivenessStatus::Checking => { + status_line.add("⏳️️"); + }, + LivenessStatus::Unreachable => { + status_line.add("‼️"); + }, + LivenessStatus::Live(latency) => { + status_line.add(format!("⚡️ {:.2?}", latency)); + }, + } + let target = "base_node::app::status"; match output { StatusLineOutput::StdOutAndLog => { diff --git a/applications/tari_base_node/src/commands/command/unban_all_peers.rs b/applications/tari_base_node/src/commands/command/unban_all_peers.rs index fde91d9e91..c3722dd85f 100644 --- 
a/applications/tari_base_node/src/commands/command/unban_all_peers.rs +++ b/applications/tari_base_node/src/commands/command/unban_all_peers.rs @@ -41,10 +41,11 @@ impl HandleCommand for CommandContext { impl CommandContext { pub async fn unban_all_peers(&self) -> Result<(), Error> { let query = PeerQuery::new().select_where(|p| p.is_banned()); - let peers = self.peer_manager.perform_query(query).await?; + let peer_manager = self.comms.peer_manager(); + let peers = peer_manager.perform_query(query).await?; let num_peers = peers.len(); for peer in peers { - if let Err(err) = self.peer_manager.unban_peer(&peer.node_id).await { + if let Err(err) = peer_manager.unban_peer(&peer.node_id).await { println!("Failed to unban peer: {}", err); } } diff --git a/applications/tari_base_node/src/commands/status_line.rs b/applications/tari_base_node/src/commands/status_line.rs index e2fa549430..188ecc6037 100644 --- a/applications/tari_base_node/src/commands/status_line.rs +++ b/applications/tari_base_node/src/commands/status_line.rs @@ -43,6 +43,10 @@ impl StatusLine { Default::default() } + pub fn add(&mut self, value: T) -> &mut Self { + self.add_field("", value) + } + pub fn add_field(&mut self, name: &'static str, value: T) -> &mut Self { self.fields.push((name, value.to_string())); self @@ -54,7 +58,7 @@ impl Display for StatusLine { write!(f, "{} ", Local::now().format("%H:%M"))?; let s = self.fields.iter().map(|(k, v)| format(k, v)).collect::>(); - write!(f, "{}", s.join(", ")) + write!(f, "{}", s.join(" ")) } } diff --git a/applications/tari_base_node/src/grpc/base_node_grpc_server.rs b/applications/tari_base_node/src/grpc/base_node_grpc_server.rs index e82ca58d9c..0eae5abb8f 100644 --- a/applications/tari_base_node/src/grpc/base_node_grpc_server.rs +++ b/applications/tari_base_node/src/grpc/base_node_grpc_server.rs @@ -1675,9 +1675,9 @@ impl tari_rpc::base_node_server::BaseNode for BaseNodeGrpcServer { let sidechain_outputs = utxos .into_iter() .filter(|u| u.features.output_type.is_sidechain_type()) - .collect::>(); + .map(TryInto::try_into); - match sidechain_outputs.into_iter().map(TryInto::try_into).collect() { + match sidechain_outputs.collect() { Ok(outputs) => { let resp = tari_rpc::GetSideChainUtxosResponse { block_info: Some(tari_rpc::BlockInfo { diff --git a/applications/tari_console_wallet/Cargo.toml b/applications/tari_console_wallet/Cargo.toml index 8f78a9c42b..6b30325977 100644 --- a/applications/tari_console_wallet/Cargo.toml +++ b/applications/tari_console_wallet/Cargo.toml @@ -1,6 +1,6 @@ [package] name = "tari_console_wallet" -version = "0.38.5" +version = "0.38.7" authors = ["The Tari Development Community"] edition = "2018" license = "BSD-3-Clause" diff --git a/applications/tari_merge_mining_proxy/Cargo.toml b/applications/tari_merge_mining_proxy/Cargo.toml index d0bac48767..6fab0f4765 100644 --- a/applications/tari_merge_mining_proxy/Cargo.toml +++ b/applications/tari_merge_mining_proxy/Cargo.toml @@ -4,7 +4,7 @@ authors = ["The Tari Development Community"] description = "The Tari merge mining proxy for xmrig" repository = "https://github.com/tari-project/tari" license = "BSD-3-Clause" -version = "0.38.5" +version = "0.38.7" edition = "2018" [features] diff --git a/applications/tari_miner/Cargo.toml b/applications/tari_miner/Cargo.toml index f63f4f7291..3dffbc295d 100644 --- a/applications/tari_miner/Cargo.toml +++ b/applications/tari_miner/Cargo.toml @@ -4,7 +4,7 @@ authors = ["The Tari Development Community"] description = "The tari miner implementation" repository = 
"https://github.com/tari-project/tari" license = "BSD-3-Clause" -version = "0.38.5" +version = "0.38.7" edition = "2018" [dependencies] diff --git a/applications/tari_miner/src/difficulty.rs b/applications/tari_miner/src/difficulty.rs index d4ef569167..8e283d8348 100644 --- a/applications/tari_miner/src/difficulty.rs +++ b/applications/tari_miner/src/difficulty.rs @@ -22,9 +22,8 @@ use std::convert::TryInto; -use sha3::{Digest, Sha3_256}; use tari_app_grpc::tari_rpc::BlockHeader as grpc_header; -use tari_core::{blocks::BlockHeader, large_ints::U256}; +use tari_core::{blocks::BlockHeader, proof_of_work::sha3_difficulty}; use tari_utilities::epoch_time::EpochTime; use crate::errors::MinerError; @@ -34,7 +33,6 @@ pub type Difficulty = u64; #[derive(Clone)] pub struct BlockHeaderSha3 { pub header: BlockHeader, - hash_merge_mining: Sha3_256, pub hashes: u64, } @@ -43,19 +41,7 @@ impl BlockHeaderSha3 { #[allow(clippy::cast_sign_loss)] pub fn new(header: grpc_header) -> Result { let header: BlockHeader = header.try_into().map_err(MinerError::BlockHeader)?; - - let hash_merge_mining = Sha3_256::new().chain(header.mining_hash()); - - Ok(Self { - hash_merge_mining, - header, - hashes: 0, - }) - } - - #[inline] - fn get_hash_before_nonce(&self) -> Sha3_256 { - self.hash_merge_mining.clone() + Ok(Self { header, hashes: 0 }) } /// This function will update the timestamp of the header, but only if the new timestamp is greater than the current @@ -65,7 +51,6 @@ impl BlockHeaderSha3 { // should only change the timestamp if we move it forward. if timestamp > self.header.timestamp.as_u64() { self.header.timestamp = EpochTime::from(timestamp); - self.hash_merge_mining = Sha3_256::new().chain(self.header.mining_hash()); } } @@ -82,13 +67,7 @@ impl BlockHeaderSha3 { #[inline] pub fn difficulty(&mut self) -> Difficulty { self.hashes = self.hashes.saturating_add(1); - let hash = self - .get_hash_before_nonce() - .chain(self.header.nonce.to_le_bytes()) - .chain(self.header.pow.to_bytes()) - .finalize(); - let hash = Sha3_256::digest(&hash); - big_endian_difficulty(&hash) + sha3_difficulty(&self.header).into() } #[allow(clippy::cast_possible_wrap)] @@ -102,13 +81,6 @@ impl BlockHeaderSha3 { } } -/// This will provide the difficulty of the hash assuming the hash is big_endian -fn big_endian_difficulty(hash: &[u8]) -> Difficulty { - let scalar = U256::from_big_endian(hash); // Big endian so the hash has leading zeroes - let result = U256::MAX / scalar; - result.low_u64() -} - #[cfg(test)] pub mod test { use chrono::{DateTime, NaiveDate, Utc}; diff --git a/base_layer/common_types/Cargo.toml b/base_layer/common_types/Cargo.toml index e22cf81ef7..b19366d2b5 100644 --- a/base_layer/common_types/Cargo.toml +++ b/base_layer/common_types/Cargo.toml @@ -3,7 +3,7 @@ name = "tari_common_types" authors = ["The Tari Development Community"] description = "Tari cryptocurrency common types" license = "BSD-3-Clause" -version = "0.38.5" +version = "0.38.7" edition = "2018" [dependencies] diff --git a/base_layer/core/Cargo.toml b/base_layer/core/Cargo.toml index d97ccee92c..bd4ddd669b 100644 --- a/base_layer/core/Cargo.toml +++ b/base_layer/core/Cargo.toml @@ -6,7 +6,7 @@ repository = "https://github.com/tari-project/tari" homepage = "https://tari.com" readme = "README.md" license = "BSD-3-Clause" -version = "0.38.5" +version = "0.38.7" edition = "2018" [features] diff --git a/base_layer/core/src/base_node/comms_interface/inbound_handlers.rs b/base_layer/core/src/base_node/comms_interface/inbound_handlers.rs index 
f1c1774015..e8f455ab44 100644 --- a/base_layer/core/src/base_node/comms_interface/inbound_handlers.rs +++ b/base_layer/core/src/base_node/comms_interface/inbound_handlers.rs @@ -489,6 +489,13 @@ where B: BlockchainBackend + 'static current_meta.best_block().to_hex(), source_peer, ); + if excess_sigs.is_empty() { + let block = BlockBuilder::new(header.version) + .with_coinbase_utxo(coinbase_output, coinbase_kernel) + .with_header(header.clone()) + .build(); + return Ok(block); + } metrics::compact_block_tx_misses(header.height).set(excess_sigs.len() as i64); let block = self.request_full_block_from_peer(source_peer, block_hash).await?; return Ok(block); diff --git a/base_layer/core/src/base_node/sync/config.rs b/base_layer/core/src/base_node/sync/config.rs index 5d3a331aae..5e11deb94f 100644 --- a/base_layer/core/src/base_node/sync/config.rs +++ b/base_layer/core/src/base_node/sync/config.rs @@ -56,13 +56,13 @@ pub struct BlockchainSyncConfig { impl Default for BlockchainSyncConfig { fn default() -> Self { Self { - initial_max_sync_latency: Duration::from_secs(20), + initial_max_sync_latency: Duration::from_secs(30), max_latency_increase: Duration::from_secs(2), ban_period: Duration::from_secs(30 * 60), short_ban_period: Duration::from_secs(60), forced_sync_peers: Default::default(), validation_concurrency: 6, - rpc_deadline: Duration::from_secs(10), + rpc_deadline: Duration::from_secs(30), } } } diff --git a/base_layer/core/src/chain_storage/blockchain_database.rs b/base_layer/core/src/chain_storage/blockchain_database.rs index ca893c34ca..ba442780d2 100644 --- a/base_layer/core/src/chain_storage/blockchain_database.rs +++ b/base_layer/core/src/chain_storage/blockchain_database.rs @@ -1693,18 +1693,12 @@ fn check_for_valid_height(db: &T, height: u64) -> Result<( /// Removes blocks from the db from current tip to specified height. /// Returns the blocks removed, ordered from tip to height. -fn rewind_to_height( - db: &mut T, - mut height: u64, -) -> Result>, ChainStorageError> { +fn rewind_to_height(db: &mut T, height: u64) -> Result>, ChainStorageError> { let last_header = db.fetch_last_header()?; - let mut txn = DbTransaction::new(); - // Delete headers let last_header_height = last_header.height; let metadata = db.fetch_chain_metadata()?; - let expected_block_hash = *metadata.best_block(); let last_block_height = metadata.height_of_longest_chain(); // We use the cmp::max value here because we'll only delete headers here and leave remaining headers to be deleted // with the whole block @@ -1727,20 +1721,20 @@ fn rewind_to_height( ); } // We might have more headers than blocks, so we first see if we need to delete the extra headers. 
- (0..steps_back).for_each(|h| { + for h in 0..steps_back { + let mut txn = DbTransaction::new(); info!( target: LOG_TARGET, "Rewinding headers at height {}", last_header_height - h ); txn.delete_header(last_header_height - h); - }); - + db.write(txn)?; + } // Delete blocks let mut steps_back = last_block_height.saturating_sub(height); // No blocks to remove, no need to update the best block if steps_back == 0 { - db.write(txn)?; return Ok(vec![]); } @@ -1761,22 +1755,45 @@ fn rewind_to_height( effective_pruning_horizon ); steps_back = effective_pruning_horizon; - height = 0; } - for h in 0..steps_back { + let mut txn = DbTransaction::new(); info!(target: LOG_TARGET, "Deleting block {}", last_block_height - h,); let block = fetch_block(db, last_block_height - h, false)?; let block = Arc::new(block.try_into_chain_block()?); txn.delete_block(*block.hash()); txn.delete_header(last_block_height - h); if !prune_past_horizon && !db.contains(&DbKey::OrphanBlock(*block.hash()))? { - // Because we know we will remove blocks we can't recover, this will be a destructive rewind, so we can't - // recover from this apart from resync from another peer. Failure here should not be common as - // this chain has a valid proof of work that has been tested at this point in time. + // Because we know we will remove blocks we can't recover, this will be a destructive rewind, so we + // can't recover from this apart from resync from another peer. Failure here + // should not be common as this chain has a valid proof of work that has been + // tested at this point in time. txn.insert_chained_orphan(block.clone()); } removed_blocks.push(block); + // Set best block to one before, to keep DB consistent. Or if we reached pruned horizon, set best block to 0. + let chain_header = db.fetch_chain_header_by_height(if prune_past_horizon && h + 1 == steps_back { + 0 + } else { + last_block_height - h - 1 + })?; + let metadata = db.fetch_chain_metadata()?; + let expected_block_hash = *metadata.best_block(); + txn.set_best_block( + chain_header.height(), + chain_header.accumulated_data().hash, + chain_header.accumulated_data().total_accumulated_difficulty, + expected_block_hash, + chain_header.timestamp(), + ); + // Update metadata + debug!( + target: LOG_TARGET, + "Updating best block to height (#{}), total accumulated difficulty: {}", + chain_header.height(), + chain_header.accumulated_data().total_accumulated_difficulty + ); + db.write(txn)?; } if prune_past_horizon { @@ -1785,6 +1802,7 @@ fn rewind_to_height( // We don't have these complete blocks, so we don't push them to the channel for further processing such as the // mempool add reorg'ed tx. 
for h in 0..(last_block_height - steps_back) { + let mut txn = DbTransaction::new(); debug!( target: LOG_TARGET, "Deleting blocks and utxos {}", @@ -1792,27 +1810,10 @@ fn rewind_to_height( ); let header = fetch_header(db, last_block_height - h - steps_back)?; txn.delete_block(header.hash()); + db.write(txn)?; } } - let chain_header = db.fetch_chain_header_by_height(height)?; - // Update metadata - debug!( - target: LOG_TARGET, - "Updating best block to height (#{}), total accumulated difficulty: {}", - chain_header.height(), - chain_header.accumulated_data().total_accumulated_difficulty - ); - - txn.set_best_block( - chain_header.height(), - chain_header.accumulated_data().hash, - chain_header.accumulated_data().total_accumulated_difficulty, - expected_block_hash, - chain_header.timestamp(), - ); - db.write(txn)?; - Ok(removed_blocks) } @@ -2419,6 +2420,10 @@ fn prune_to_height(db: &mut T, target_horizon_height: u64) txn.prune_outputs_at_positions(output_mmr_positions.to_vec()); txn.delete_all_inputs_in_block(*header.hash()); + if txn.operations().len() >= 100 { + txn.set_pruned_height(block_to_prune); + db.write(mem::take(&mut txn))?; + } } txn.set_pruned_height(target_horizon_height); diff --git a/base_layer/core/src/chain_storage/lmdb_db/lmdb.rs b/base_layer/core/src/chain_storage/lmdb_db/lmdb.rs index 75e238b088..a1bc2dcd11 100644 --- a/base_layer/core/src/chain_storage/lmdb_db/lmdb.rs +++ b/base_layer/core/src/chain_storage/lmdb_db/lmdb.rs @@ -445,3 +445,51 @@ pub fn lmdb_clear(txn: &WriteTransaction<'_>, db: &Database) -> Result( + txn: &WriteTransaction<'_>, + db: &Database, + f: F, +) -> Result<(), ChainStorageError> +where + F: Fn(V) -> Option, + V: DeserializeOwned, + R: Serialize, +{ + let mut access = txn.access(); + let mut cursor = txn.cursor(db).map_err(|e| { + error!(target: LOG_TARGET, "Could not get read cursor from lmdb: {:?}", e); + ChainStorageError::AccessError(e.to_string()) + })?; + let iter = CursorIter::new( + MaybeOwned::Borrowed(&mut cursor), + &access, + |c, a| c.first(a), + Cursor::next::<[u8], [u8]>, + )?; + let items = iter + .map(|r| r.map(|(k, v)| (k.to_vec(), v.to_vec()))) + .collect::, _>>()?; + + for (key, val) in items { + // let (key, val) = row?; + let val = deserialize::(&val)?; + if let Some(ret) = f(val) { + let ret_bytes = serialize(&ret)?; + access.put(db, &key, &ret_bytes, put::Flags::empty()).map_err(|e| { + if let lmdb_zero::Error::Code(code) = &e { + if *code == lmdb_zero::error::MAP_FULL { + return ChainStorageError::DbResizeRequired; + } + } + error!( + target: LOG_TARGET, + "Could not replace value in lmdb transaction: {:?}", e + ); + ChainStorageError::AccessError(e.to_string()) + })?; + } + } + Ok(()) +} diff --git a/base_layer/core/src/chain_storage/lmdb_db/lmdb_db.rs b/base_layer/core/src/chain_storage/lmdb_db/lmdb_db.rs index 7cbb3f1ac3..abd74ceb2f 100644 --- a/base_layer/core/src/chain_storage/lmdb_db/lmdb_db.rs +++ b/base_layer/core/src/chain_storage/lmdb_db/lmdb_db.rs @@ -313,6 +313,8 @@ impl LMDBDatabase { consensus_manager, }; + run_migrations(&db)?; + Ok(db) } @@ -2751,6 +2753,7 @@ enum MetadataKey { HorizonData, DeletedBitmap, BestBlockTimestamp, + MigrationVersion, } impl MetadataKey { @@ -2763,14 +2766,15 @@ impl MetadataKey { impl fmt::Display for MetadataKey { fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result { match self { - MetadataKey::ChainHeight => f.write_str("Current chain height"), - MetadataKey::AccumulatedWork => f.write_str("Total accumulated work"), - MetadataKey::PruningHorizon => f.write_str("Pruning 
horizon"), - MetadataKey::PrunedHeight => f.write_str("Effective pruned height"), - MetadataKey::BestBlock => f.write_str("Chain tip block hash"), - MetadataKey::HorizonData => f.write_str("Database info"), - MetadataKey::DeletedBitmap => f.write_str("Deleted bitmap"), - MetadataKey::BestBlockTimestamp => f.write_str("Chain tip block timestamp"), + MetadataKey::ChainHeight => write!(f, "Current chain height"), + MetadataKey::AccumulatedWork => write!(f, "Total accumulated work"), + MetadataKey::PruningHorizon => write!(f, "Pruning horizon"), + MetadataKey::PrunedHeight => write!(f, "Effective pruned height"), + MetadataKey::BestBlock => write!(f, "Chain tip block hash"), + MetadataKey::HorizonData => write!(f, "Database info"), + MetadataKey::DeletedBitmap => write!(f, "Deleted bitmap"), + MetadataKey::BestBlockTimestamp => write!(f, "Chain tip block timestamp"), + MetadataKey::MigrationVersion => write!(f, "Migration version"), } } } @@ -2786,6 +2790,7 @@ enum MetadataValue { HorizonData(HorizonData), DeletedBitmap(DeletedBitmap), BestBlockTimestamp(u64), + MigrationVersion(u64), } impl fmt::Display for MetadataValue { @@ -2801,6 +2806,7 @@ impl fmt::Display for MetadataValue { write!(f, "Deleted Bitmap ({} indexes)", deleted.bitmap().cardinality()) }, MetadataValue::BestBlockTimestamp(timestamp) => write!(f, "Chain tip block timestamp is {}", timestamp), + MetadataValue::MigrationVersion(n) => write!(f, "Migration version {}", n), } } } @@ -2867,3 +2873,112 @@ impl<'a, 'b> DeletedBitmapModel<'a, WriteTransaction<'b>> { Ok(()) } } + +fn run_migrations(db: &LMDBDatabase) -> Result<(), ChainStorageError> { + const MIGRATION_VERSION: u64 = 1; + let txn = db.read_transaction()?; + + let k = MetadataKey::MigrationVersion; + let val = lmdb_get::<_, MetadataValue>(&*txn, &db.metadata_db, &k.as_u32())?; + let n = match val { + Some(MetadataValue::MigrationVersion(n)) => n, + Some(_) | None => 0, + }; + info!( + target: LOG_TARGET, + "Blockchain database is at v{} (required version: {})", n, MIGRATION_VERSION + ); + drop(txn); + + if n < MIGRATION_VERSION { + tari_script_execution_stack_bug_migration::migrate(db)?; + info!(target: LOG_TARGET, "Migrated database to version {}", MIGRATION_VERSION); + let txn = db.write_transaction()?; + lmdb_replace( + &txn, + &db.metadata_db, + &k.as_u32(), + &MetadataValue::MigrationVersion(MIGRATION_VERSION), + )?; + txn.commit()?; + } + + Ok(()) +} + +// TODO: this is a temporary fix, remove +mod tari_script_execution_stack_bug_migration { + use std::mem; + + use serde::{Deserialize, Serialize}; + use tari_common_types::types::{ComSignature, PublicKey}; + use tari_crypto::ristretto::{pedersen::PedersenCommitment, RistrettoPublicKey, RistrettoSchnorr}; + use tari_script::{ExecutionStack, HashValue, ScalarValue, StackItem}; + + use super::*; + use crate::{ + chain_storage::lmdb_db::lmdb::lmdb_map_inplace, + transactions::transaction_components::{SpentOutput, TransactionInputVersion}, + }; + + pub fn migrate(db: &LMDBDatabase) -> Result<(), ChainStorageError> { + { + let txn = db.read_transaction()?; + // Only perform migration if necessary + if lmdb_len(&txn, &db.inputs_db)? 
== 0 { + return Ok(()); + } + } + unsafe { + LMDBStore::resize(&db.env, &LMDBConfig::new(0, 1024 * 1024 * 1024, 0))?; + } + let txn = db.write_transaction()?; + lmdb_map_inplace(&txn, &db.inputs_db, |mut v: TransactionInputRowDataV0| { + let mut items = Vec::with_capacity(v.input.input_data.items.len()); + while let Some(item) = v.input.input_data.items.pop() { + if let StackItemV0::Commitment(ref commitment) = item { + let pk = PublicKey::from_bytes(commitment.as_bytes()).unwrap(); + items.push(StackItem::PublicKey(pk)); + } else { + items.push(unsafe { mem::transmute(item) }); + } + } + let mut v = unsafe { mem::transmute::<_, TransactionInputRowData>(v) }; + v.input.input_data = ExecutionStack::new(items); + Some(v) + })?; + txn.commit()?; + Ok(()) + } + + #[derive(Debug, Serialize, Deserialize)] + pub(crate) struct TransactionInputRowDataV0 { + pub input: TransactionInputV0, + pub header_hash: HashOutput, + pub mmr_position: u32, + pub hash: HashOutput, + } + + #[derive(Debug, Serialize, Deserialize)] + pub struct TransactionInputV0 { + version: TransactionInputVersion, + spent_output: SpentOutput, + input_data: ExecutionStackV0, + script_signature: ComSignature, + } + + #[derive(Debug, Serialize, Deserialize)] + struct ExecutionStackV0 { + items: Vec, + } + + #[derive(Debug, Serialize, Deserialize)] + enum StackItemV0 { + Number(i64), + Hash(HashValue), + Scalar(ScalarValue), + Commitment(PedersenCommitment), + PublicKey(RistrettoPublicKey), + Signature(RistrettoSchnorr), + } +} diff --git a/base_layer/core/src/consensus/consensus_constants.rs b/base_layer/core/src/consensus/consensus_constants.rs index e4cbcf6708..2b4badb98a 100644 --- a/base_layer/core/src/consensus/consensus_constants.rs +++ b/base_layer/core/src/consensus/consensus_constants.rs @@ -517,30 +517,54 @@ impl ConsensusConstants { target_time: 200, }); let (input_version_range, output_version_range, kernel_version_range) = version_zero(); - vec![ConsensusConstants { - effective_from_height: 0, - // Todo fix after test - coinbase_lock_height: 6, - blockchain_version: 0, - valid_blockchain_version_range: 0..=0, - future_time_limit: 540, - difficulty_block_window: 90, - max_block_transaction_weight: 127_795, - median_timestamp_count: 11, - emission_initial: 18_462_816_327 * uT, - emission_decay: &ESMERALDA_DECAY_PARAMS, - emission_tail: 800 * T, - max_randomx_seed_height: 3000, - proof_of_work: algos, - faucet_value: (10 * 4000) * T, - transaction_weight: TransactionWeight::v1(), - max_script_byte_size: 2048, - input_version_range, - output_version_range, - kernel_version_range, - permitted_output_types: Self::current_permitted_output_types(), - validator_node_timeout: 50, - }] + vec![ + ConsensusConstants { + effective_from_height: 0, + coinbase_lock_height: 6, + blockchain_version: 0, + valid_blockchain_version_range: 0..=0, + future_time_limit: 540, + difficulty_block_window: 90, + max_block_transaction_weight: 127_795, + median_timestamp_count: 11, + emission_initial: 18_462_816_327 * uT, + emission_decay: &ESMERALDA_DECAY_PARAMS, + emission_tail: 800 * T, + max_randomx_seed_height: 3000, + proof_of_work: algos.clone(), + faucet_value: (10 * 4000) * T, + transaction_weight: TransactionWeight::v1(), + max_script_byte_size: 2048, + input_version_range: input_version_range.clone(), + output_version_range: output_version_range.clone(), + kernel_version_range: kernel_version_range.clone(), + permitted_output_types: Self::current_permitted_output_types(), + validator_node_timeout: 50, + }, + ConsensusConstants { + 
effective_from_height: 23000, + coinbase_lock_height: 6, + blockchain_version: 1, + valid_blockchain_version_range: 0..=1, + future_time_limit: 540, + difficulty_block_window: 90, + max_block_transaction_weight: 127_795, + median_timestamp_count: 11, + emission_initial: 18_462_816_327 * uT, + emission_decay: &ESMERALDA_DECAY_PARAMS, + emission_tail: 800 * T, + max_randomx_seed_height: 3000, + proof_of_work: algos, + faucet_value: (10 * 4000) * T, + transaction_weight: TransactionWeight::v1(), + max_script_byte_size: 2048, + input_version_range, + output_version_range, + kernel_version_range, + permitted_output_types: Self::current_permitted_output_types(), + validator_node_timeout: 50, + }, + ] } pub fn mainnet() -> Vec { @@ -667,6 +691,11 @@ impl ConsensusConstantsBuilder { self } + pub fn with_blockchain_version(mut self, version: u16) -> Self { + self.consensus.blockchain_version = version; + self + } + pub fn build(self) -> ConsensusConstants { self.consensus } diff --git a/base_layer/core/src/consensus/consensus_encoding/script.rs b/base_layer/core/src/consensus/consensus_encoding/script.rs index 17e8aa7dce..ea11a27c33 100644 --- a/base_layer/core/src/consensus/consensus_encoding/script.rs +++ b/base_layer/core/src/consensus/consensus_encoding/script.rs @@ -31,7 +31,7 @@ use crate::consensus::{ConsensusDecoding, ConsensusEncoding, ConsensusEncodingSi impl ConsensusEncoding for TariScript { fn consensus_encode(&self, writer: &mut W) -> Result<(), io::Error> { - self.as_bytes().consensus_encode(writer) + self.to_bytes().consensus_encode(writer) } } @@ -54,7 +54,7 @@ impl ConsensusDecoding for TariScript { impl ConsensusEncoding for ExecutionStack { fn consensus_encode(&self, writer: &mut W) -> Result<(), io::Error> { - self.as_bytes().consensus_encode(writer) + self.to_bytes().consensus_encode(writer) } } diff --git a/base_layer/core/src/proof_of_work/sha3_pow.rs b/base_layer/core/src/proof_of_work/sha3_pow.rs index 4b79c29fa6..fe56685dd5 100644 --- a/base_layer/core/src/proof_of_work/sha3_pow.rs +++ b/base_layer/core/src/proof_of_work/sha3_pow.rs @@ -37,12 +37,19 @@ pub fn sha3_difficulty(header: &BlockHeader) -> Difficulty { } pub fn sha3_hash(header: &BlockHeader) -> Vec { - Sha3_256::new() - .chain(header.mining_hash()) - .chain(header.nonce.to_le_bytes()) - .chain(header.pow.to_bytes()) - .finalize() - .to_vec() + let sha = Sha3_256::new(); + match header.version { + 0 => sha + .chain(header.mining_hash()) + .chain(header.nonce.to_le_bytes()) + .chain(header.pow.to_bytes()), + _ => sha + .chain(header.nonce.to_le_bytes()) + .chain(header.mining_hash()) + .chain(header.pow.to_bytes()), + } + .finalize() + .to_vec() } fn sha3_difficulty_with_hash(header: &BlockHeader) -> (Difficulty, Vec) { diff --git a/base_layer/core/src/proto/transaction.rs b/base_layer/core/src/proto/transaction.rs index 700e7cb3d5..5a8b2beff3 100644 --- a/base_layer/core/src/proto/transaction.rs +++ b/base_layer/core/src/proto/transaction.rs @@ -168,7 +168,7 @@ impl TryFrom for proto::types::TransactionInput { if input.is_compact() { let output_hash = input.output_hash(); Ok(Self { - input_data: input.input_data.as_bytes(), + input_data: input.input_data.to_bytes(), script_signature: Some(input.script_signature.into()), output_hash: output_hash.to_vec(), ..Default::default() @@ -192,8 +192,8 @@ impl TryFrom for proto::types::TransactionInput { script: input .script() .map_err(|_| "Non-compact Transaction input should contain script".to_string())? 
- .as_bytes(), - input_data: input.input_data.as_bytes(), + .to_bytes(), + input_data: input.input_data.to_bytes(), script_signature: Some(input.script_signature.clone().into()), sender_offset_public_key: input .sender_offset_public_key() @@ -277,7 +277,7 @@ impl From for proto::types::TransactionOutput { features: Some(output.features.into()), commitment: Some(output.commitment.into()), range_proof: output.proof.to_vec(), - script: output.script.as_bytes(), + script: output.script.to_bytes(), sender_offset_public_key: output.sender_offset_public_key.as_bytes().to_vec(), metadata_signature: Some(output.metadata_signature.into()), covenant: output.covenant.to_bytes(), diff --git a/base_layer/core/src/transactions/transaction_components/transaction_input.rs b/base_layer/core/src/transactions/transaction_components/transaction_input.rs index 9c3e664bdf..2a48ed400a 100644 --- a/base_layer/core/src/transactions/transaction_components/transaction_input.rs +++ b/base_layer/core/src/transactions/transaction_components/transaction_input.rs @@ -270,9 +270,11 @@ impl TransactionInput { SpentOutput::OutputData { ref script, .. } => { match script.execute_with_context(&self.input_data, &context)? { StackItem::PublicKey(pubkey) => Ok(pubkey), - _ => Err(TransactionError::ScriptExecutionError( - "The script executed successfully but it did not leave a public key on the stack".to_string(), - )), + item => Err(TransactionError::ScriptExecutionError(format!( + "The script executed successfully but it did not leave a public key on the stack. Remaining \ + stack item was {:?}", + item + ))), } }, } diff --git a/base_layer/core/src/transactions/transaction_protocol/proto/transaction_sender.rs b/base_layer/core/src/transactions/transaction_protocol/proto/transaction_sender.rs index b3c4e91a34..8820bc18bf 100644 --- a/base_layer/core/src/transactions/transaction_protocol/proto/transaction_sender.rs +++ b/base_layer/core/src/transactions/transaction_protocol/proto/transaction_sender.rs @@ -138,7 +138,7 @@ impl From for proto::SingleRoundSenderData { metadata: Some(sender_data.metadata.into()), message: sender_data.message, features: Some(sender_data.features.into()), - script: sender_data.script.as_bytes(), + script: sender_data.script.to_bytes(), sender_offset_public_key: sender_data.sender_offset_public_key.to_vec(), public_commitment_nonce: sender_data.public_commitment_nonce.to_vec(), covenant: sender_data.covenant.to_consensus_bytes(), diff --git a/base_layer/core/tests/block_validation.rs b/base_layer/core/tests/block_validation.rs index 9659a99b55..01037db622 100644 --- a/base_layer/core/tests/block_validation.rs +++ b/base_layer/core/tests/block_validation.rs @@ -102,6 +102,7 @@ fn test_monero_blocks() { max_difficulty: 1.into(), target_time: 200, }) + .with_blockchain_version(0) .build(); let cm = ConsensusManager::builder(network).add_consensus_constants(cc).build(); let header_validator = HeaderValidator::new(cm.clone()); diff --git a/base_layer/core/tests/chain_storage_tests/chain_backend.rs b/base_layer/core/tests/chain_storage_tests/chain_backend.rs index fcdc74b6d7..822c456eee 100644 --- a/base_layer/core/tests/chain_storage_tests/chain_backend.rs +++ b/base_layer/core/tests/chain_storage_tests/chain_backend.rs @@ -33,7 +33,7 @@ use tari_test_utils::paths::create_temporary_data_path; use crate::helpers::database::create_orphan_block; #[test] -fn lmdb_insert_contains_delete_and_fetch_orphan() { +fn test_lmdb_insert_contains_delete_and_fetch_orphan() { let network = Network::LocalNet; let consensus = 
ConsensusManagerBuilder::new(network).build(); let mut db = create_test_db(); @@ -63,7 +63,7 @@ fn lmdb_insert_contains_delete_and_fetch_orphan() { } #[test] -fn lmdb_file_lock() { +fn test_lmdb_file_lock() { // Create temporary test folder let temp_path = create_temporary_data_path(); diff --git a/base_layer/core/tests/chain_storage_tests/chain_storage.rs b/base_layer/core/tests/chain_storage_tests/chain_storage.rs index 4fd53d9758..a69c5a71f5 100644 --- a/base_layer/core/tests/chain_storage_tests/chain_storage.rs +++ b/base_layer/core/tests/chain_storage_tests/chain_storage.rs @@ -75,7 +75,7 @@ use crate::helpers::{ }; #[test] -fn fetch_nonexistent_header() { +fn test_fetch_nonexistent_header() { let network = Network::LocalNet; let _consensus_manager = ConsensusManagerBuilder::new(network).build(); let store = create_test_blockchain_db(); @@ -84,7 +84,7 @@ fn fetch_nonexistent_header() { } #[test] -fn insert_and_fetch_header() { +fn test_insert_and_fetch_header() { let network = Network::LocalNet; let _consensus_manager = ConsensusManagerBuilder::new(network).build(); let store = create_test_blockchain_db(); @@ -110,7 +110,7 @@ fn insert_and_fetch_header() { } #[test] -fn insert_and_fetch_orphan() { +fn test_insert_and_fetch_orphan() { let network = Network::LocalNet; let consensus_manager = ConsensusManagerBuilder::new(network).build(); let store = create_test_blockchain_db(); @@ -127,7 +127,7 @@ fn insert_and_fetch_orphan() { } #[test] -fn store_and_retrieve_block() { +fn test_store_and_retrieve_block() { let (db, blocks, _, _) = create_new_blockchain(Network::LocalNet); let hash = blocks[0].hash(); // Check the metadata @@ -144,7 +144,7 @@ fn store_and_retrieve_block() { } #[test] -fn add_multiple_blocks() { +fn test_add_multiple_blocks() { // Create new database with genesis block let network = Network::LocalNet; let consensus_manager = ConsensusManagerBuilder::new(network).build(); @@ -201,7 +201,7 @@ fn test_checkpoints() { #[test] #[allow(clippy::identity_op)] -fn rewind_to_height() { +fn test_rewind_to_height() { let _ = env_logger::builder().is_test(true).try_init(); let network = Network::LocalNet; let (mut db, mut blocks, mut outputs, consensus_manager) = create_new_blockchain(network); @@ -277,7 +277,7 @@ fn test_coverage_chain_storage() { } #[test] -fn rewind_past_horizon_height() { +fn test_rewind_past_horizon_height() { let network = Network::LocalNet; let block0 = genesis_block::get_esmeralda_genesis_block(); let consensus_manager = ConsensusManagerBuilder::new(network).with_block(block0.clone()).build(); @@ -320,7 +320,7 @@ fn rewind_past_horizon_height() { } #[test] -fn handle_tip_reorg() { +fn test_handle_tip_reorg() { // GB --> A1 --> A2(Low PoW) [Main Chain] // \--> B2(Highest PoW) [Forked Chain] // Initially, the main chain is GB->A1->A2. B2 has a higher accumulated PoW and when B2 is added the main chain is @@ -388,7 +388,7 @@ fn handle_tip_reorg() { #[test] #[allow(clippy::identity_op)] #[allow(clippy::too_many_lines)] -fn handle_reorg() { +fn test_handle_reorg() { // GB --> A1 --> A2 --> A3 -----> A4(Low PoW) [Main Chain] // \--> B2 --> B3(?) 
--> B4(Medium PoW) [Forked Chain 1] // \-----> C4(Highest PoW) [Forked Chain 2] @@ -561,7 +561,7 @@ fn handle_reorg() { #[test] #[allow(clippy::too_many_lines)] -fn reorgs_should_update_orphan_tips() { +fn test_reorgs_should_update_orphan_tips() { // Create a main chain GB -> A1 -> A2 // Create an orphan chain GB -> B1 // Add a block B2 that forces a reorg to B2 @@ -810,7 +810,7 @@ fn reorgs_should_update_orphan_tips() { } #[test] -fn handle_reorg_with_no_removed_blocks() { +fn test_handle_reorg_with_no_removed_blocks() { // GB --> A1 // \--> B2 (?) --> B3) // Initially, the main chain is GB->A1 with orphaned blocks B3. When B2 arrives late and is @@ -883,7 +883,7 @@ fn handle_reorg_with_no_removed_blocks() { } #[test] -fn handle_reorg_failure_recovery() { +fn test_handle_reorg_failure_recovery() { // GB --> A1 --> A2 --> A3 -----> A4(Low PoW) [Main Chain] // \--> B2 --> B3(double spend - rejected by db) [Forked Chain 1] // \--> B2 --> B3'(validation failed) [Forked Chain 1] @@ -1002,7 +1002,7 @@ fn handle_reorg_failure_recovery() { } #[test] -fn store_and_retrieve_blocks() { +fn test_store_and_retrieve_blocks() { let validators = Validators::new( MockValidator::new(true), MockValidator::new(true), @@ -1064,7 +1064,7 @@ fn store_and_retrieve_blocks() { #[test] #[allow(clippy::identity_op)] -fn store_and_retrieve_blocks_from_contents() { +fn test_store_and_retrieve_blocks_from_contents() { let network = Network::LocalNet; let (mut db, mut blocks, mut outputs, consensus_manager) = create_new_blockchain(network); @@ -1102,7 +1102,7 @@ fn store_and_retrieve_blocks_from_contents() { } #[test] -fn restore_metadata_and_pruning_horizon_update() { +fn test_restore_metadata_and_pruning_horizon_update() { // Perform test let validators = Validators::new( MockValidator::new(true), @@ -1177,7 +1177,7 @@ fn restore_metadata_and_pruning_horizon_update() { } static EMISSION: [u64; 2] = [10, 10]; #[test] -fn invalid_block() { +fn test_invalid_block() { let factories = CryptoFactories::default(); let network = Network::LocalNet; let consensus_constants = ConsensusConstantsBuilder::new(network) @@ -1278,7 +1278,7 @@ fn invalid_block() { } #[test] -fn orphan_cleanup_on_block_add() { +fn test_orphan_cleanup_on_block_add() { let network = Network::LocalNet; let consensus_manager = ConsensusManagerBuilder::new(network).build(); let validators = Validators::new( @@ -1345,7 +1345,7 @@ fn orphan_cleanup_on_block_add() { } #[test] -fn horizon_height_orphan_cleanup() { +fn test_horizon_height_orphan_cleanup() { let network = Network::LocalNet; let block0 = genesis_block::get_esmeralda_genesis_block(); let consensus_manager = ConsensusManagerBuilder::new(network).with_block(block0.clone()).build(); @@ -1405,7 +1405,7 @@ fn horizon_height_orphan_cleanup() { #[test] #[allow(clippy::too_many_lines)] -fn orphan_cleanup_on_reorg() { +fn test_orphan_cleanup_on_reorg() { // Create Main Chain let network = Network::LocalNet; let factories = CryptoFactories::default(); @@ -1541,7 +1541,7 @@ fn orphan_cleanup_on_reorg() { } #[test] -fn orphan_cleanup_delete_all_orphans() { +fn test_orphan_cleanup_delete_all_orphans() { let path = create_temporary_data_path(); let network = Network::LocalNet; let validators = Validators::new( @@ -1646,7 +1646,7 @@ fn orphan_cleanup_delete_all_orphans() { } #[test] -fn fails_validation() { +fn test_fails_validation() { let network = Network::LocalNet; let factories = CryptoFactories::default(); let consensus_constants = ConsensusConstantsBuilder::new(network).build(); @@ -1757,8 +1757,7 @@ mod 
malleability { // This test highlights that the "version" field is not being included in the input hash // so a consensus change is needed for the input to include it #[test] - #[ignore] - fn version() { + fn test_version() { check_input_malleability(|block: &mut Block| { let input = &mut block.body.inputs_mut()[0]; let mod_version = match input.version { @@ -1770,7 +1769,7 @@ mod malleability { } #[test] - fn spent_output() { + fn test_spent_output() { check_input_malleability(|block: &mut Block| { // to modify the spent output, we will substitute it with a copy of a different output // we will use one of the outputs of the current transaction @@ -1791,7 +1790,7 @@ mod malleability { } #[test] - fn input_data() { + fn test_input_data() { check_input_malleability(|block: &mut Block| { block.body.inputs_mut()[0] .input_data @@ -1801,7 +1800,7 @@ mod malleability { } #[test] - fn script_signature() { + fn test_script_signature() { check_input_malleability(|block: &mut Block| { let input = &mut block.body.inputs_mut()[0]; input.script_signature = ComSignature::default(); @@ -1813,7 +1812,7 @@ mod malleability { use super::*; #[test] - fn version() { + fn test_version() { check_output_malleability(|block: &mut Block| { let output = &mut block.body.outputs_mut()[0]; let mod_version = match output.version { @@ -1825,7 +1824,7 @@ mod malleability { } #[test] - fn features() { + fn test_features() { check_output_malleability(|block: &mut Block| { let output = &mut block.body.outputs_mut()[0]; output.features.maturity += 1; @@ -1833,7 +1832,7 @@ mod malleability { } #[test] - fn commitment() { + fn test_commitment() { check_output_malleability(|block: &mut Block| { let output = &mut block.body.outputs_mut()[0]; let mod_commitment = &output.commitment + &output.commitment; @@ -1842,7 +1841,7 @@ mod malleability { } #[test] - fn proof() { + fn test_proof() { check_witness_malleability(|block: &mut Block| { let output = &mut block.body.outputs_mut()[0]; let mod_proof = RangeProof::from_hex(&(output.proof.to_hex() + "00")).unwrap(); @@ -1851,10 +1850,10 @@ mod malleability { } #[test] - fn script() { + fn test_script() { check_output_malleability(|block: &mut Block| { let output = &mut block.body.outputs_mut()[0]; - let mut script_bytes = output.script.as_bytes(); + let mut script_bytes = output.script.to_bytes(); Opcode::PushZero.to_bytes(&mut script_bytes); let mod_script = TariScript::from_bytes(&script_bytes).unwrap(); output.script = mod_script; @@ -1864,8 +1863,7 @@ mod malleability { // This test highlights that the "sender_offset_public_key" field is not being included in the output hash // so a consensus change is needed for the output to include it #[test] - #[ignore] - fn sender_offset_public_key() { + fn test_sender_offset_public_key() { check_output_malleability(|block: &mut Block| { let output = &mut block.body.outputs_mut()[0]; @@ -1876,7 +1874,7 @@ mod malleability { } #[test] - fn metadata_signature() { + fn test_metadata_signature() { check_witness_malleability(|block: &mut Block| { let output = &mut block.body.outputs_mut()[0]; output.metadata_signature = ComSignature::default(); @@ -1884,7 +1882,7 @@ mod malleability { } #[test] - fn covenant() { + fn test_covenant() { check_output_malleability(|block: &mut Block| { let output = &mut block.body.outputs_mut()[0]; let mod_covenant = covenant!(absolute_height(@uint(42))); @@ -1903,7 +1901,7 @@ mod malleability { // the "features" field has only a constant value at the moment, so no malleability test possible #[test] - fn fee() { + fn
test_fee() { check_kernel_malleability(|block: &mut Block| { let kernel = &mut block.body.kernels_mut()[0]; kernel.fee += MicroTari::from(1); @@ -1911,7 +1909,7 @@ mod malleability { } #[test] - fn lock_height() { + fn test_lock_height() { check_kernel_malleability(|block: &mut Block| { let kernel = &mut block.body.kernels_mut()[0]; kernel.lock_height += 1; @@ -1919,7 +1917,7 @@ mod malleability { } #[test] - fn excess() { + fn test_excess() { check_kernel_malleability(|block: &mut Block| { let kernel = &mut block.body.kernels_mut()[0]; let mod_excess = &kernel.excess + &kernel.excess; @@ -1928,7 +1926,7 @@ mod malleability { } #[test] - fn excess_sig() { + fn test_excess_sig() { check_kernel_malleability(|block: &mut Block| { let kernel = &mut block.body.kernels_mut()[0]; // "generate_keys" should return a group of random keys, different from the ones in the field @@ -1941,7 +1939,7 @@ mod malleability { #[allow(clippy::identity_op)] #[test] -fn fetch_deleted_position_block_hash() { +fn test_fetch_deleted_position_block_hash() { // Create Main Chain let network = Network::LocalNet; let (mut store, mut blocks, mut outputs, consensus_manager) = create_new_blockchain(network); diff --git a/base_layer/key_manager/Cargo.toml b/base_layer/key_manager/Cargo.toml index 75d69a695f..843b4b3d50 100644 --- a/base_layer/key_manager/Cargo.toml +++ b/base_layer/key_manager/Cargo.toml @@ -4,7 +4,7 @@ authors = ["The Tari Development Community"] description = "Tari cryptocurrency wallet key management" repository = "https://github.com/tari-project/tari" license = "BSD-3-Clause" -version = "0.38.5" +version = "0.38.7" edition = "2021" [lib] diff --git a/base_layer/mmr/Cargo.toml b/base_layer/mmr/Cargo.toml index 5774c5b1ea..7cadc4811f 100644 --- a/base_layer/mmr/Cargo.toml +++ b/base_layer/mmr/Cargo.toml @@ -4,7 +4,7 @@ authors = ["The Tari Development Community"] description = "A Merkle Mountain Range implementation" repository = "https://github.com/tari-project/tari" license = "BSD-3-Clause" -version = "0.38.5" +version = "0.38.7" edition = "2018" [features] diff --git a/base_layer/p2p/Cargo.toml b/base_layer/p2p/Cargo.toml index 665991d18a..d6c255d2f2 100644 --- a/base_layer/p2p/Cargo.toml +++ b/base_layer/p2p/Cargo.toml @@ -1,6 +1,6 @@ [package] name = "tari_p2p" -version = "0.38.5" +version = "0.38.7" authors = ["The Tari Development community"] description = "Tari base layer-specific peer-to-peer communication features" repository = "https://github.com/tari-project/tari" diff --git a/base_layer/p2p/src/config.rs b/base_layer/p2p/src/config.rs index 41cd121d99..5fb4030411 100644 --- a/base_layer/p2p/src/config.rs +++ b/base_layer/p2p/src/config.rs @@ -20,11 +20,15 @@ // WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE // USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. -use std::path::{Path, PathBuf}; +use std::{ + path::{Path, PathBuf}, + time::Duration, +}; use serde::{Deserialize, Serialize}; use tari_common::{ configuration::{ + serializers, utils::{deserialize_string_or_struct, serialize_string}, StringList, }, @@ -105,6 +109,9 @@ pub struct P2pConfig { /// Liveness sessions can be used by third party tooling to determine node liveness. /// A value of 0 will disallow any liveness sessions.
pub listener_liveness_max_sessions: usize, + /// If Some, enables periodic socket-level liveness checks + #[serde(with = "serializers::optional_seconds")] + pub listener_liveness_check_interval: Option<Duration>, /// CIDR for addresses allowed to enter into liveness check mode on the listener. pub listener_liveness_allowlist_cidrs: StringList, /// User agent string for this node @@ -137,6 +144,7 @@ impl Default for P2pConfig { }, allow_test_addresses: false, listener_liveness_max_sessions: 0, + listener_liveness_check_interval: None, listener_liveness_allowlist_cidrs: StringList::default(), user_agent: String::new(), auxiliary_tcp_listener_address: None, diff --git a/base_layer/p2p/src/initialization.rs b/base_layer/p2p/src/initialization.rs index 8f6d0c2147..43c6218018 100644 --- a/base_layer/p2p/src/initialization.rs +++ b/base_layer/p2p/src/initialization.rs @@ -543,7 +543,8 @@ impl ServiceInitializer for P2pInitializer { minor_version: MINOR_NETWORK_VERSION, network_byte: self.network.as_byte(), user_agent: config.user_agent.clone(), - }); + }) + .set_liveness_check(config.listener_liveness_check_interval); if config.allow_test_addresses || config.dht.allow_test_addresses { // The default is false, so ensure that both settings are true in this case diff --git a/base_layer/p2p/src/services/liveness/service.rs b/base_layer/p2p/src/services/liveness/service.rs index def15f5116..5da92ad100 100644 --- a/base_layer/p2p/src/services/liveness/service.rs +++ b/base_layer/p2p/src/services/liveness/service.rs @@ -161,7 +161,7 @@ where match ping_pong_msg.kind().ok_or(LivenessError::InvalidPingPongType)? { PingPong::Ping => { self.state.inc_pings_received(); - self.send_pong(ping_pong_msg.nonce, public_key).await.unwrap(); + self.send_pong(ping_pong_msg.nonce, public_key).await?; self.state.inc_pongs_sent(); debug!( diff --git a/base_layer/service_framework/Cargo.toml b/base_layer/service_framework/Cargo.toml index f70eb71d7a..a210101ada 100644 --- a/base_layer/service_framework/Cargo.toml +++ b/base_layer/service_framework/Cargo.toml @@ -1,6 +1,6 @@ [package] name = "tari_service_framework" -version = "0.38.5" +version = "0.38.7" authors = ["The Tari Development Community"] description = "The Tari communication stack service framework" repository = "https://github.com/tari-project/tari" diff --git a/base_layer/tari_mining_helper_ffi/Cargo.toml b/base_layer/tari_mining_helper_ffi/Cargo.toml index 53072f6487..847a26b6c3 100644 --- a/base_layer/tari_mining_helper_ffi/Cargo.toml +++ b/base_layer/tari_mining_helper_ffi/Cargo.toml @@ -3,7 +3,7 @@ name = "tari_mining_helper_ffi" authors = ["The Tari Development Community"] description = "Tari cryptocurrency miningcore C FFI bindings" license = "BSD-3-Clause" -version = "0.38.5" +version = "0.38.7" edition = "2018" [dependencies] diff --git a/base_layer/wallet/Cargo.toml b/base_layer/wallet/Cargo.toml index cf62400004..1637d03106 100644 --- a/base_layer/wallet/Cargo.toml +++ b/base_layer/wallet/Cargo.toml @@ -3,7 +3,7 @@ name = "tari_wallet" authors = ["The Tari Development Community"] description = "Tari cryptocurrency wallet library" license = "BSD-3-Clause" -version = "0.38.5" +version = "0.38.7" edition = "2018" [dependencies] diff --git a/base_layer/wallet/src/config.rs b/base_layer/wallet/src/config.rs index ca5a472efe..00df3f5729 100644 --- a/base_layer/wallet/src/config.rs +++ b/base_layer/wallet/src/config.rs @@ -123,6 +123,7 @@ impl Default for WalletConfig { fn default() -> Self { let p2p = P2pConfig { datastore_path: PathBuf::from("peer_db/wallet"), +
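// None keeps the wallet's periodic socket-level liveness checks disabled by default. +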
listener_liveness_check_interval: None, ..Default::default() }; Self { diff --git a/base_layer/wallet/src/output_manager_service/storage/sqlite_db/mod.rs b/base_layer/wallet/src/output_manager_service/storage/sqlite_db/mod.rs index 0710212f94..45968f6708 100644 --- a/base_layer/wallet/src/output_manager_service/storage/sqlite_db/mod.rs +++ b/base_layer/wallet/src/output_manager_service/storage/sqlite_db/mod.rs @@ -391,56 +391,54 @@ impl OutputManagerBackend for OutputManagerSqliteDatabase { let conn = self.database_connection.get_pooled_connection()?; let acquire_lock = start.elapsed(); - match op { - WriteOperation::Insert(kvp) => self.insert(kvp, &conn)?, + let mut msg = "".to_string(); + let result = match op { + WriteOperation::Insert(kvp) => { + msg.push_str("Insert"); + self.insert(kvp, &conn)?; + Ok(None) + }, WriteOperation::Remove(k) => match k { DbKey::AnyOutputByCommitment(commitment) => { - // Used by coinbase when mining. - match OutputSql::find_by_commitment(&commitment.to_vec(), &conn) { - Ok(mut o) => { - o.delete(&conn)?; - self.decrypt_if_necessary(&mut o)?; - if start.elapsed().as_millis() > 0 { - trace!( - target: LOG_TARGET, - "sqlite profile - write Remove: lock {} + db_op {} = {} ms", - acquire_lock.as_millis(), - (start.elapsed() - acquire_lock).as_millis(), - start.elapsed().as_millis() - ); - } - return Ok(Some(DbValue::AnyOutput(Box::new(DbUnblindedOutput::try_from(o)?)))); - }, - Err(e) => { - match e { - OutputManagerStorageError::DieselError(DieselError::NotFound) => (), - e => return Err(e), - }; - }, - } + conn.transaction::<_, _, _>(|| { + msg.push_str("Remove"); + // Used by coinbase when mining. + match OutputSql::find_by_commitment(&commitment.to_vec(), &conn) { + Ok(mut o) => { + o.delete(&conn)?; + self.decrypt_if_necessary(&mut o)?; + Ok(Some(DbValue::AnyOutput(Box::new(DbUnblindedOutput::try_from(o)?)))) + }, + Err(e) => match e { + OutputManagerStorageError::DieselError(DieselError::NotFound) => Ok(None), + e => Err(e), + }, + } + }) }, - DbKey::SpentOutput(_s) => return Err(OutputManagerStorageError::OperationNotSupported), - DbKey::UnspentOutputHash(_h) => return Err(OutputManagerStorageError::OperationNotSupported), - DbKey::UnspentOutput(_k) => return Err(OutputManagerStorageError::OperationNotSupported), - DbKey::UnspentOutputs => return Err(OutputManagerStorageError::OperationNotSupported), - DbKey::SpentOutputs => return Err(OutputManagerStorageError::OperationNotSupported), - DbKey::InvalidOutputs => return Err(OutputManagerStorageError::OperationNotSupported), - DbKey::TimeLockedUnspentOutputs(_) => return Err(OutputManagerStorageError::OperationNotSupported), - DbKey::KnownOneSidedPaymentScripts => return Err(OutputManagerStorageError::OperationNotSupported), - DbKey::OutputsByTxIdAndStatus(_, _) => return Err(OutputManagerStorageError::OperationNotSupported), + DbKey::SpentOutput(_s) => Err(OutputManagerStorageError::OperationNotSupported), + DbKey::UnspentOutputHash(_h) => Err(OutputManagerStorageError::OperationNotSupported), + DbKey::UnspentOutput(_k) => Err(OutputManagerStorageError::OperationNotSupported), + DbKey::UnspentOutputs => Err(OutputManagerStorageError::OperationNotSupported), + DbKey::SpentOutputs => Err(OutputManagerStorageError::OperationNotSupported), + DbKey::InvalidOutputs => Err(OutputManagerStorageError::OperationNotSupported), + DbKey::TimeLockedUnspentOutputs(_) => Err(OutputManagerStorageError::OperationNotSupported), + DbKey::KnownOneSidedPaymentScripts => Err(OutputManagerStorageError::OperationNotSupported), + 
DbKey::OutputsByTxIdAndStatus(_, _) => Err(OutputManagerStorageError::OperationNotSupported), }, - } + }; if start.elapsed().as_millis() > 0 { trace!( target: LOG_TARGET, - "sqlite profile - write Insert: lock {} + db_op {} = {} ms", + "sqlite profile - write {}: lock {} + db_op {} = {} ms", + msg, acquire_lock.as_millis(), (start.elapsed() - acquire_lock).as_millis(), start.elapsed().as_millis() ); } - Ok(None) + result } fn fetch_pending_incoming_outputs(&self) -> Result, OutputManagerStorageError> { @@ -852,50 +850,55 @@ impl OutputManagerBackend for OutputManagerSqliteDatabase { let conn = self.database_connection.get_pooled_connection()?; let acquire_lock = start.elapsed(); - let outputs = OutputSql::find_by_tx_id_and_encumbered(tx_id, &conn)?; + conn.transaction::<_, _, _>(|| { + let outputs = OutputSql::find_by_tx_id_and_encumbered(tx_id, &conn)?; - if outputs.is_empty() { - return Err(OutputManagerStorageError::ValueNotFound); - } + if outputs.is_empty() { + return Err(OutputManagerStorageError::ValueNotFound); + } - for output in &outputs { - if output.received_in_tx_id == Some(tx_id.as_i64_wrapped()) { - info!( - target: LOG_TARGET, - "Cancelling pending inbound output with Commitment: {} - MMR Position: {:?} from TxId: {}", - output.commitment.as_ref().unwrap_or(&vec![]).to_hex(), - output.mined_mmr_position, - tx_id - ); - output.update( - UpdateOutput { - status: Some(OutputStatus::CancelledInbound), - ..Default::default() - }, - &conn, - )?; - } else if output.spent_in_tx_id == Some(tx_id.as_i64_wrapped()) { - info!( - target: LOG_TARGET, - "Cancelling pending outbound output with Commitment: {} - MMR Position: {:?} from TxId: {}", - output.commitment.as_ref().unwrap_or(&vec![]).to_hex(), - output.mined_mmr_position, - tx_id - ); - output.update( - UpdateOutput { - status: Some(OutputStatus::Unspent), - spent_in_tx_id: Some(None), - // We clear these so that the output will be revalidated the next time a validation is done. - mined_height: Some(None), - mined_in_block: Some(None), - ..Default::default() - }, - &conn, - )?; - } else { + for output in &outputs { + if output.received_in_tx_id == Some(tx_id.as_i64_wrapped()) { + info!( + target: LOG_TARGET, + "Cancelling pending inbound output with Commitment: {} - MMR Position: {:?} from TxId: {}", + output.commitment.as_ref().unwrap_or(&vec![]).to_hex(), + output.mined_mmr_position, + tx_id + ); + output.update( + UpdateOutput { + status: Some(OutputStatus::CancelledInbound), + ..Default::default() + }, + &conn, + )?; + } else if output.spent_in_tx_id == Some(tx_id.as_i64_wrapped()) { + info!( + target: LOG_TARGET, + "Cancelling pending outbound output with Commitment: {} - MMR Position: {:?} from TxId: {}", + output.commitment.as_ref().unwrap_or(&vec![]).to_hex(), + output.mined_mmr_position, + tx_id + ); + output.update( + UpdateOutput { + status: Some(OutputStatus::Unspent), + spent_in_tx_id: Some(None), + // We clear these so that the output will be revalidated the next time a validation is done. 
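+ // Some(None) writes SQL NULL: the outer Some selects the column for update and the inner None clears it (Diesel's Option<Option<T>> changeset pattern).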
+ mined_height: Some(None), + mined_in_block: Some(None), + ..Default::default() + }, + &conn, + )?; + } else { + } } - } + + Ok(()) + })?; + if start.elapsed().as_millis() > 0 { trace!( target: LOG_TARGET, @@ -915,17 +918,22 @@ impl OutputManagerBackend for OutputManagerSqliteDatabase { let start = Instant::now(); let conn = self.database_connection.get_pooled_connection()?; let acquire_lock = start.elapsed(); - let db_output = OutputSql::find_by_commitment_and_cancelled(&output.commitment.to_vec(), false, &conn)?; - db_output.update( - // Note: Only the `nonce` and `u` portion needs to be updated at this time as the `v` portion is already - // correct - UpdateOutput { - metadata_signature_nonce: Some(output.metadata_signature.public_nonce().to_vec()), - metadata_signature_u_key: Some(output.metadata_signature.u().to_vec()), - ..Default::default() - }, - &conn, - )?; + + conn.transaction::<_, OutputManagerStorageError, _>(|| { + let db_output = OutputSql::find_by_commitment_and_cancelled(&output.commitment.to_vec(), false, &conn)?; + db_output.update( + // Note: Only the `nonce` and `u` portion needs to be updated at this time as the `v` portion is + // already correct + UpdateOutput { + metadata_signature_nonce: Some(output.metadata_signature.public_nonce().to_vec()), + metadata_signature_u_key: Some(output.metadata_signature.u().to_vec()), + ..Default::default() + }, + &conn, + )?; + + Ok(()) + })?; if start.elapsed().as_millis() > 0 { trace!( target: LOG_TARGET, @@ -943,18 +951,23 @@ impl OutputManagerBackend for OutputManagerSqliteDatabase { let start = Instant::now(); let conn = self.database_connection.get_pooled_connection()?; let acquire_lock = start.elapsed(); - let output = OutputSql::find_by_commitment_and_cancelled(&commitment.to_vec(), false, &conn)?; - if OutputStatus::try_from(output.status)? != OutputStatus::Invalid { - return Err(OutputManagerStorageError::ValuesNotFound); - } - output.update( - UpdateOutput { - status: Some(OutputStatus::Unspent), - ..Default::default() - }, - &conn, - )?; + conn.transaction::<_, _, _>(|| { + let output = OutputSql::find_by_commitment_and_cancelled(&commitment.to_vec(), false, &conn)?; + + if OutputStatus::try_from(output.status)? 
!= OutputStatus::Invalid { + return Err(OutputManagerStorageError::ValuesNotFound); + } + output.update( + UpdateOutput { + status: Some(OutputStatus::Unspent), + ..Default::default() + }, + &conn, + )?; + + Ok(()) + })?; if start.elapsed().as_millis() > 0 { trace!( target: LOG_TARGET, @@ -1417,8 +1430,8 @@ impl From for KnownOneSidedPaymentScriptSql { let script_lock_height = known_script.script_lock_height as i64; let script_hash = known_script.script_hash; let private_key = known_script.private_key.as_bytes().to_vec(); - let script = known_script.script.as_bytes().to_vec(); - let input = known_script.input.as_bytes().to_vec(); + let script = known_script.script.to_bytes().to_vec(); + let input = known_script.input.to_bytes().to_vec(); KnownOneSidedPaymentScriptSql { script_hash, private_key, diff --git a/base_layer/wallet/src/output_manager_service/storage/sqlite_db/new_output_sql.rs b/base_layer/wallet/src/output_manager_service/storage/sqlite_db/new_output_sql.rs index d3d2561ee8..2878e54a6c 100644 --- a/base_layer/wallet/src/output_manager_service/storage/sqlite_db/new_output_sql.rs +++ b/base_layer/wallet/src/output_manager_service/storage/sqlite_db/new_output_sql.rs @@ -83,8 +83,8 @@ impl NewOutputSql { status: status as i32, received_in_tx_id: received_in_tx_id.map(|i| i.as_u64() as i64), hash: Some(output.hash.to_vec()), - script: output.unblinded_output.script.as_bytes(), - input_data: output.unblinded_output.input_data.as_bytes(), + script: output.unblinded_output.script.to_bytes(), + input_data: output.unblinded_output.input_data.to_bytes(), script_private_key: output.unblinded_output.script_private_key.to_vec(), metadata: Some(output.unblinded_output.features.metadata.clone()), sender_offset_public_key: output.unblinded_output.sender_offset_public_key.to_vec(), diff --git a/base_layer/wallet/src/transaction_service/storage/database.rs b/base_layer/wallet/src/transaction_service/storage/database.rs index f018ba3088..8a7eec4100 100644 --- a/base_layer/wallet/src/transaction_service/storage/database.rs +++ b/base_layer/wallet/src/transaction_service/storage/database.rs @@ -117,7 +117,7 @@ pub trait TransactionBackend: Send + Sync + Clone { /// Mark a pending transaction direct send attempt as a success fn mark_direct_send_success(&self, tx_id: TxId) -> Result<(), TransactionStorageError>; /// Cancel coinbase transactions at a specific block height - fn cancel_coinbase_transaction_at_block_height(&self, block_height: u64) -> Result<(), TransactionStorageError>; + fn cancel_coinbase_transactions_at_block_height(&self, block_height: u64) -> Result<(), TransactionStorageError>; /// Find coinbase transaction at a specific block height for a given amount fn find_coinbase_transaction_at_block_height( &self, @@ -693,7 +693,7 @@ where T: TransactionBackend + 'static &self, block_height: u64, ) -> Result<(), TransactionStorageError> { - self.db.cancel_coinbase_transaction_at_block_height(block_height) + self.db.cancel_coinbase_transactions_at_block_height(block_height) } pub fn find_coinbase_transaction_at_block_height( diff --git a/base_layer/wallet/src/transaction_service/storage/sqlite_db.rs b/base_layer/wallet/src/transaction_service/storage/sqlite_db.rs index 92a101cadc..7d244ca817 100644 --- a/base_layer/wallet/src/transaction_service/storage/sqlite_db.rs +++ b/base_layer/wallet/src/transaction_service/storage/sqlite_db.rs @@ -123,44 +123,50 @@ impl TransactionServiceSqliteDatabase { fn remove(&self, key: DbKey, conn: &SqliteConnection) -> Result, TransactionStorageError> { match 
key { - DbKey::PendingOutboundTransaction(k) => match OutboundTransactionSql::find_by_cancelled(k, false, conn) { - Ok(mut v) => { - v.delete(conn)?; - self.decrypt_if_necessary(&mut v)?; - Ok(Some(DbValue::PendingOutboundTransaction(Box::new( - OutboundTransaction::try_from(v)?, - )))) - }, - Err(TransactionStorageError::DieselError(DieselError::NotFound)) => Err( - TransactionStorageError::ValueNotFound(DbKey::PendingOutboundTransaction(k)), - ), - Err(e) => Err(e), + DbKey::PendingOutboundTransaction(k) => { + conn.transaction::<_, _, _>(|| match OutboundTransactionSql::find_by_cancelled(k, false, conn) { + Ok(mut v) => { + v.delete(conn)?; + self.decrypt_if_necessary(&mut v)?; + Ok(Some(DbValue::PendingOutboundTransaction(Box::new( + OutboundTransaction::try_from(v)?, + )))) + }, + Err(TransactionStorageError::DieselError(DieselError::NotFound)) => Err( + TransactionStorageError::ValueNotFound(DbKey::PendingOutboundTransaction(k)), + ), + Err(e) => Err(e), + }) }, - DbKey::PendingInboundTransaction(k) => match InboundTransactionSql::find_by_cancelled(k, false, conn) { - Ok(mut v) => { - v.delete(conn)?; - self.decrypt_if_necessary(&mut v)?; - Ok(Some(DbValue::PendingInboundTransaction(Box::new( - InboundTransaction::try_from(v)?, - )))) - }, - Err(TransactionStorageError::DieselError(DieselError::NotFound)) => Err( - TransactionStorageError::ValueNotFound(DbKey::PendingOutboundTransaction(k)), - ), - Err(e) => Err(e), + DbKey::PendingInboundTransaction(k) => { + conn.transaction::<_, _, _>(|| match InboundTransactionSql::find_by_cancelled(k, false, conn) { + Ok(mut v) => { + v.delete(conn)?; + self.decrypt_if_necessary(&mut v)?; + Ok(Some(DbValue::PendingInboundTransaction(Box::new( + InboundTransaction::try_from(v)?, + )))) + }, + Err(TransactionStorageError::DieselError(DieselError::NotFound)) => Err( + TransactionStorageError::ValueNotFound(DbKey::PendingOutboundTransaction(k)), + ), + Err(e) => Err(e), + }) }, - DbKey::CompletedTransaction(k) => match CompletedTransactionSql::find_by_cancelled(k, false, conn) { - Ok(mut v) => { - v.delete(conn)?; - self.decrypt_if_necessary(&mut v)?; - Ok(Some(DbValue::CompletedTransaction(Box::new( - CompletedTransaction::try_from(v)?, - )))) - }, - Err(TransactionStorageError::DieselError(DieselError::NotFound)) => { - Err(TransactionStorageError::ValueNotFound(DbKey::CompletedTransaction(k))) - }, - Err(e) => Err(e), + DbKey::CompletedTransaction(k) => { + conn.transaction::<_, _, _>(|| match CompletedTransactionSql::find_by_cancelled(k, false, conn) { + Ok(mut v) => { + v.delete(conn)?; + self.decrypt_if_necessary(&mut v)?; + Ok(Some(DbValue::CompletedTransaction(Box::new( + CompletedTransaction::try_from(v)?, + )))) + }, + Err(TransactionStorageError::DieselError(DieselError::NotFound)) => { + Err(TransactionStorageError::ValueNotFound(DbKey::CompletedTransaction(k))) + }, + Err(e) => Err(e), + }) }, DbKey::PendingOutboundTransactions => Err(TransactionStorageError::OperationNotSupported), DbKey::PendingInboundTransactions => Err(TransactionStorageError::OperationNotSupported), @@ -169,7 +175,7 @@ impl TransactionServiceSqliteDatabase { DbKey::CancelledPendingInboundTransactions => Err(TransactionStorageError::OperationNotSupported), DbKey::CancelledCompletedTransactions => Err(TransactionStorageError::OperationNotSupported), DbKey::CancelledPendingOutboundTransaction(k) => { - match OutboundTransactionSql::find_by_cancelled(k, true, conn) { + conn.transaction::<_, _, _>(|| match OutboundTransactionSql::find_by_cancelled(k, true, conn) { 
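+ // The find-then-delete pair (and the decrypt of the returned row) now executes atomically inside a single sql transaction.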
Ok(mut v) => { v.delete(conn)?; self.decrypt_if_necessary(&mut v)?; @@ -181,10 +187,10 @@ impl TransactionServiceSqliteDatabase { TransactionStorageError::ValueNotFound(DbKey::CancelledPendingOutboundTransaction(k)), ), Err(e) => Err(e), - } + }) }, DbKey::CancelledPendingInboundTransaction(k) => { - match InboundTransactionSql::find_by_cancelled(k, true, conn) { + conn.transaction::<_, _, _>(|| match InboundTransactionSql::find_by_cancelled(k, true, conn) { Ok(mut v) => { v.delete(conn)?; self.decrypt_if_necessary(&mut v)?; @@ -196,7 +202,7 @@ impl TransactionServiceSqliteDatabase { TransactionStorageError::ValueNotFound(DbKey::CancelledPendingOutboundTransaction(k)), ), Err(e) => Err(e), - } + }) }, DbKey::AnyTransaction(_) => Err(TransactionStorageError::OperationNotSupported), } @@ -579,20 +585,22 @@ impl TransactionBackend for TransactionServiceSqliteDatabase { return Err(TransactionStorageError::TransactionAlreadyExists); } - match OutboundTransactionSql::find_by_cancelled(tx_id, false, &conn) { - Ok(v) => { - let mut completed_tx_sql = CompletedTransactionSql::try_from(completed_transaction)?; - self.encrypt_if_necessary(&mut completed_tx_sql)?; - v.delete(&conn)?; - completed_tx_sql.commit(&conn)?; - }, - Err(TransactionStorageError::DieselError(DieselError::NotFound)) => { - return Err(TransactionStorageError::ValueNotFound( - DbKey::PendingOutboundTransaction(tx_id), - )) - }, - Err(e) => return Err(e), - }; + let mut completed_tx_sql = CompletedTransactionSql::try_from(completed_transaction)?; + self.encrypt_if_necessary(&mut completed_tx_sql)?; + + conn.transaction::<_, _, _>(|| { + match OutboundTransactionSql::complete_outbound_transaction(tx_id, &conn) { + Ok(_) => completed_tx_sql.commit(&conn)?, + Err(TransactionStorageError::DieselError(DieselError::NotFound)) => { + return Err(TransactionStorageError::ValueNotFound( + DbKey::PendingOutboundTransaction(tx_id), + )) + }, + Err(e) => return Err(e), + } + + Ok(()) + })?; if start.elapsed().as_millis() > 0 { trace!( target: LOG_TARGET, @@ -618,20 +626,22 @@ impl TransactionBackend for TransactionServiceSqliteDatabase { return Err(TransactionStorageError::TransactionAlreadyExists); } - match InboundTransactionSql::find_by_cancelled(tx_id, false, &conn) { - Ok(v) => { - let mut completed_tx_sql = CompletedTransactionSql::try_from(completed_transaction)?; - self.encrypt_if_necessary(&mut completed_tx_sql)?; - v.delete(&conn)?; - completed_tx_sql.commit(&conn)?; - }, - Err(TransactionStorageError::DieselError(DieselError::NotFound)) => { - return Err(TransactionStorageError::ValueNotFound( - DbKey::PendingInboundTransaction(tx_id), - )) - }, - Err(e) => return Err(e), - }; + let mut completed_tx_sql = CompletedTransactionSql::try_from(completed_transaction)?; + self.encrypt_if_necessary(&mut completed_tx_sql)?; + + conn.transaction::<_, _, _>(|| { + match InboundTransactionSql::complete_inbound_transaction(tx_id, &conn) { + Ok(_) => completed_tx_sql.commit(&conn)?, + Err(TransactionStorageError::DieselError(DieselError::NotFound)) => { + return Err(TransactionStorageError::ValueNotFound( + DbKey::PendingInboundTransaction(tx_id), + )) + }, + Err(e) => return Err(e), + }; + + Ok(()) + })?; if start.elapsed().as_millis() > 0 { trace!( target: LOG_TARGET, @@ -649,25 +659,32 @@ impl TransactionBackend for TransactionServiceSqliteDatabase { let conn = self.database_connection.get_pooled_connection()?; let acquire_lock = start.elapsed(); - match CompletedTransactionSql::find_by_cancelled(tx_id, false, &conn) { - Ok(v) => { - if 
TransactionStatus::try_from(v.status)? == TransactionStatus::Completed { - v.update( - UpdateCompletedTransactionSql { - status: Some(TransactionStatus::Broadcast as i32), - ..Default::default() - }, - &conn, - )?; - } - }, - Err(TransactionStorageError::DieselError(DieselError::NotFound)) => { - return Err(TransactionStorageError::ValueNotFound(DbKey::CompletedTransaction( - tx_id, - ))) - }, - Err(e) => return Err(e), - }; + conn.transaction::<_, _, _>(|| { + match CompletedTransactionSql::find_by_cancelled(tx_id, false, &conn) { + Ok(v) => { + // Note: This status check does not error when the statuses do not match, which makes it + // inefficient to combine the 'find' and 'update' queries. + if TransactionStatus::try_from(v.status)? == TransactionStatus::Completed { + v.update( + UpdateCompletedTransactionSql { + status: Some(TransactionStatus::Broadcast as i32), + ..Default::default() + }, + &conn, + )?; + } + }, + Err(TransactionStorageError::DieselError(DieselError::NotFound)) => { + return Err(TransactionStorageError::ValueNotFound(DbKey::CompletedTransaction( + tx_id, + ))) + }, + Err(e) => return Err(e), + } + + Ok(()) + })?; + if start.elapsed().as_millis() > 0 { trace!( target: LOG_TARGET, @@ -688,17 +705,15 @@ impl TransactionBackend for TransactionServiceSqliteDatabase { let start = Instant::now(); let conn = self.database_connection.get_pooled_connection()?; let acquire_lock = start.elapsed(); - match CompletedTransactionSql::find_by_cancelled(tx_id, false, &conn) { - Ok(v) => { - v.reject(reason, &conn)?; - }, + match CompletedTransactionSql::reject_completed_transaction(tx_id, reason, &conn) { + Ok(_) => {}, Err(TransactionStorageError::DieselError(DieselError::NotFound)) => { return Err(TransactionStorageError::ValueNotFound(DbKey::CompletedTransaction( tx_id, ))); }, Err(e) => return Err(e), - }; + } if start.elapsed().as_millis() > 0 { trace!( target: LOG_TARGET, @@ -719,22 +734,20 @@ impl TransactionBackend for TransactionServiceSqliteDatabase { let start = Instant::now(); let conn = self.database_connection.get_pooled_connection()?; let acquire_lock = start.elapsed(); - match InboundTransactionSql::find(tx_id, &conn) { - Ok(v) => { - v.set_cancelled(cancelled, &conn)?; - }, + + match InboundTransactionSql::find_and_set_cancelled(tx_id, cancelled, &conn) { + Ok(_) => {}, Err(_) => { - match OutboundTransactionSql::find(tx_id, &conn) { - Ok(v) => { - v.set_cancelled(cancelled, &conn)?; - }, + match OutboundTransactionSql::find_and_set_cancelled(tx_id, cancelled, &conn) { + Ok(_) => {}, Err(TransactionStorageError::DieselError(DieselError::NotFound)) => { return Err(TransactionStorageError::ValuesNotFound); }, Err(e) => return Err(e), }; }, - }; + } + if start.elapsed().as_millis() > 0 { trace!( target: LOG_TARGET, @@ -751,33 +764,12 @@ impl TransactionBackend for TransactionServiceSqliteDatabase { let start = Instant::now(); let conn = self.database_connection.get_pooled_connection()?; let acquire_lock = start.elapsed(); - match InboundTransactionSql::find_by_cancelled(tx_id, false, &conn) { - Ok(v) => { - v.update( - UpdateInboundTransactionSql { - cancelled: None, - direct_send_success: Some(1i32), - receiver_protocol: None, - send_count: None, - last_send_timestamp: None, - }, - &conn, - )?; - }, + + match InboundTransactionSql::mark_direct_send_success(tx_id, &conn) { + Ok(_) => {}, Err(_) => { - match OutboundTransactionSql::find_by_cancelled(tx_id, false, &conn) { - Ok(v) => { - v.update( - UpdateOutboundTransactionSql { - cancelled: None, - direct_send_success:
Some(1i32), - sender_protocol: None, - send_count: None, - last_send_timestamp: None, - }, - &conn, - )?; - }, + match OutboundTransactionSql::mark_direct_send_success(tx_id, &conn) { + Ok(_) => {}, Err(TransactionStorageError::DieselError(DieselError::NotFound)) => { return Err(TransactionStorageError::ValuesNotFound); }, @@ -785,6 +777,7 @@ impl TransactionBackend for TransactionServiceSqliteDatabase { }; }, }; + if start.elapsed().as_millis() > 0 { trace!( target: LOG_TARGET, @@ -808,55 +801,68 @@ impl TransactionBackend for TransactionServiceSqliteDatabase { let conn = self.database_connection.get_pooled_connection()?; let acquire_lock = start.elapsed(); - let mut inbound_txs = InboundTransactionSql::index(&conn)?; - // If the db is already encrypted then the very first output we try to encrypt will fail. - for tx in &mut inbound_txs { - // Test if this transaction is encrypted or not to avoid a double encryption. - let _inbound_transaction = InboundTransaction::try_from(tx.clone()).map_err(|_| { - error!( - target: LOG_TARGET, - "Could not convert Inbound Transaction from database version, it might already be encrypted" - ); - TransactionStorageError::AlreadyEncrypted - })?; - tx.encrypt(&cipher) - .map_err(|_| TransactionStorageError::AeadError("Encryption Error".to_string()))?; - tx.update_encryption(&conn)?; - } + conn.transaction::<_, TransactionStorageError, _>(|| { + let mut inbound_txs = InboundTransactionSql::index(&conn)?; + // If the db is already encrypted then the very first output we try to encrypt will fail. + for tx in &mut inbound_txs { + // Test if this transaction is encrypted or not to avoid a double encryption. + let _inbound_transaction = InboundTransaction::try_from(tx.clone()).map_err(|_| { + error!( + target: LOG_TARGET, + "Could not convert Inbound Transaction from database version, it might already be encrypted" + ); + TransactionStorageError::AlreadyEncrypted + })?; + tx.encrypt(&cipher) + .map_err(|_| TransactionStorageError::AeadError("Encryption Error".to_string()))?; + tx.update_encryption(&conn)?; + } - let mut outbound_txs = OutboundTransactionSql::index(&conn)?; - // If the db is already encrypted then the very first output we try to encrypt will fail. - for tx in &mut outbound_txs { - // Test if this transaction is encrypted or not to avoid a double encryption. - let _outbound_transaction = OutboundTransaction::try_from(tx.clone()).map_err(|_| { - error!( - target: LOG_TARGET, - "Could not convert Inbound Transaction from database version, it might already be encrypted" - ); - TransactionStorageError::AlreadyEncrypted - })?; - tx.encrypt(&cipher) - .map_err(|_| TransactionStorageError::AeadError("Encryption Error".to_string()))?; - tx.update_encryption(&conn)?; - } + Ok(()) + })?; + + conn.transaction::<_, TransactionStorageError, _>(|| { + let mut outbound_txs = OutboundTransactionSql::index(&conn)?; + // If the db is already encrypted then the very first output we try to encrypt will fail. + for tx in &mut outbound_txs { + // Test if this transaction is encrypted or not to avoid a double encryption. 
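+ // The converted value is discarded; only the success or failure of the conversion matters here.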
+ let _outbound_transaction = OutboundTransaction::try_from(tx.clone()).map_err(|_| { + error!( + target: LOG_TARGET, + "Could not convert Outbound Transaction from database version, it might already be encrypted" + ); + TransactionStorageError::AlreadyEncrypted + })?; + tx.encrypt(&cipher) + .map_err(|_| TransactionStorageError::AeadError("Encryption Error".to_string()))?; + tx.update_encryption(&conn)?; + } - let mut completed_txs = CompletedTransactionSql::index(&conn)?; - // If the db is already encrypted then the very first output we try to encrypt will fail. - for tx in &mut completed_txs { - // Test if this transaction is encrypted or not to avoid a double encryption. - let _completed_transaction = CompletedTransaction::try_from(tx.clone()).map_err(|_| { - error!( - target: LOG_TARGET, - "Could not convert Inbound Transaction from database version, it might already be encrypted" - ); - TransactionStorageError::AlreadyEncrypted - })?; - tx.encrypt(&cipher) - .map_err(|_| TransactionStorageError::AeadError("Encryption Error".to_string()))?; - tx.update_encryption(&conn)?; - } + Ok(()) + })?; + + conn.transaction::<_, TransactionStorageError, _>(|| { + let mut completed_txs = CompletedTransactionSql::index(&conn)?; + // If the db is already encrypted then the very first output we try to encrypt will fail. + for tx in &mut completed_txs { + // Test if this transaction is encrypted or not to avoid a double encryption. + let _completed_transaction = CompletedTransaction::try_from(tx.clone()).map_err(|_| { + error!( + target: LOG_TARGET, + "Could not convert Completed Transaction from database version, it might already be encrypted" + ); + TransactionStorageError::AlreadyEncrypted + })?; + tx.encrypt(&cipher) + .map_err(|_| TransactionStorageError::AeadError("Encryption Error".to_string()))?; + tx.update_encryption(&conn)?; + } + + Ok(()) + })?; (*current_cipher) = Some(cipher); + if start.elapsed().as_millis() > 0 { trace!( target: LOG_TARGET, @@ -882,31 +888,44 @@ impl TransactionBackend for TransactionServiceSqliteDatabase { let conn = self.database_connection.get_pooled_connection()?; let acquire_lock = start.elapsed(); - let mut inbound_txs = InboundTransactionSql::index(&conn)?; + conn.transaction::<_, TransactionStorageError, _>(|| { + let mut inbound_txs = InboundTransactionSql::index(&conn)?; - for tx in &mut inbound_txs { - tx.decrypt(&cipher) - .map_err(|_| TransactionStorageError::AeadError("Decryption Error".to_string()))?; - tx.update_encryption(&conn)?; - } + for tx in &mut inbound_txs { + tx.decrypt(&cipher) + .map_err(|_| TransactionStorageError::AeadError("Decryption Error".to_string()))?; + tx.update_encryption(&conn)?; + } - let mut outbound_txs = OutboundTransactionSql::index(&conn)?; + Ok(()) + })?; - for tx in &mut outbound_txs { - tx.decrypt(&cipher) - .map_err(|_| TransactionStorageError::AeadError("Decryption Error".to_string()))?; - tx.update_encryption(&conn)?; - } + conn.transaction::<_, TransactionStorageError, _>(|| { + let mut outbound_txs = OutboundTransactionSql::index(&conn)?; - let mut completed_txs = CompletedTransactionSql::index(&conn)?; - for tx in &mut completed_txs { - tx.decrypt(&cipher) - .map_err(|_| TransactionStorageError::AeadError("Decryption Error".to_string()))?; - tx.update_encryption(&conn)?; - } + for tx in &mut outbound_txs { + tx.decrypt(&cipher) + .map_err(|_| TransactionStorageError::AeadError("Decryption Error".to_string()))?; + tx.update_encryption(&conn)?; + } + + Ok(()) + })?; + + conn.transaction::<_, TransactionStorageError,
_>(|| { + let mut completed_txs = CompletedTransactionSql::index(&conn)?; + for tx in &mut completed_txs { + tx.decrypt(&cipher) + .map_err(|_| TransactionStorageError::AeadError("Decryption Error".to_string()))?; + tx.update_encryption(&conn)?; + } + + Ok(()) + })?; // Now that all the decryption has been completed we can safely remove the cipher fully std::mem::drop((*current_cipher).take()); + if start.elapsed().as_millis() > 0 { trace!( target: LOG_TARGET, @@ -920,15 +939,16 @@ impl TransactionBackend for TransactionServiceSqliteDatabase { Ok(()) } - fn cancel_coinbase_transaction_at_block_height(&self, block_height: u64) -> Result<(), TransactionStorageError> { + fn cancel_coinbase_transactions_at_block_height(&self, block_height: u64) -> Result<(), TransactionStorageError> { let start = Instant::now(); let conn = self.database_connection.get_pooled_connection()?; let acquire_lock = start.elapsed(); - let coinbase_txs = CompletedTransactionSql::index_coinbase_at_block_height(block_height as i64, &conn)?; - for c in &coinbase_txs { - c.reject(TxCancellationReason::AbandonedCoinbase, &conn)?; - } + CompletedTransactionSql::reject_coinbases_at_block_height( + block_height as i64, + TxCancellationReason::AbandonedCoinbase, + &conn, + )?; if start.elapsed().as_millis() > 0 { trace!( target: LOG_TARGET, @@ -977,34 +997,13 @@ impl TransactionBackend for TransactionServiceSqliteDatabase { let conn = self.database_connection.get_pooled_connection()?; let acquire_lock = start.elapsed(); - if let Ok(tx) = CompletedTransactionSql::find(tx_id, &conn) { - let update = UpdateCompletedTransactionSql { - send_count: Some(tx.send_count + 1), - last_send_timestamp: Some(Some(Utc::now().naive_utc())), - ..Default::default() - }; - tx.update(update, &conn)?; - } else if let Ok(tx) = OutboundTransactionSql::find(tx_id, &conn) { - let update = UpdateOutboundTransactionSql { - cancelled: None, - direct_send_success: None, - sender_protocol: None, - send_count: Some(tx.send_count + 1), - last_send_timestamp: Some(Some(Utc::now().naive_utc())), - }; - tx.update(update, &conn)?; - } else if let Ok(tx) = InboundTransactionSql::find_by_cancelled(tx_id, false, &conn) { - let update = UpdateInboundTransactionSql { - cancelled: None, - direct_send_success: None, - receiver_protocol: None, - send_count: Some(tx.send_count + 1), - last_send_timestamp: Some(Some(Utc::now().naive_utc())), - }; - tx.update(update, &conn)?; - } else { + if CompletedTransactionSql::increment_send_count(tx_id, &conn).is_err() && + OutboundTransactionSql::increment_send_count(tx_id, &conn).is_err() && + InboundTransactionSql::increment_send_count(tx_id, &conn).is_err() + { return Err(TransactionStorageError::ValuesNotFound); } + if start.elapsed().as_millis() > 0 { trace!( target: LOG_TARGET, @@ -1031,25 +1030,36 @@ impl TransactionBackend for TransactionServiceSqliteDatabase { let start = Instant::now(); let conn = self.database_connection.get_pooled_connection()?; let acquire_lock = start.elapsed(); - match CompletedTransactionSql::find(tx_id, &conn) { - Ok(v) => { - v.update_mined_height( - mined_height, - mined_in_block, - mined_timestamp, - num_confirmations, - is_confirmed, - &conn, - is_faux, - )?; - }, + let status = if is_confirmed { + if is_faux { + TransactionStatus::FauxConfirmed + } else { + TransactionStatus::MinedConfirmed + } + } else if is_faux { + TransactionStatus::FauxUnconfirmed + } else { + TransactionStatus::MinedUnconfirmed + }; + + match CompletedTransactionSql::update_mined_height( + tx_id, + num_confirmations, + 
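// `status` is derived above from the is_confirmed and is_faux flags. +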
status, + mined_height, + mined_in_block, + mined_timestamp, + &conn, + ) { + Ok(_) => {}, Err(TransactionStorageError::DieselError(DieselError::NotFound)) => { return Err(TransactionStorageError::ValueNotFound(DbKey::CompletedTransaction( tx_id, ))); }, Err(e) => return Err(e), - }; + } + if start.elapsed().as_millis() > 0 { trace!( target: LOG_TARGET, @@ -1186,17 +1196,15 @@ impl TransactionBackend for TransactionServiceSqliteDatabase { let start = Instant::now(); let conn = self.database_connection.get_pooled_connection()?; let acquire_lock = start.elapsed(); - match CompletedTransactionSql::find(tx_id, &conn) { - Ok(v) => { - v.set_as_unmined(&conn)?; - }, + match CompletedTransactionSql::set_as_unmined(tx_id, &conn) { + Ok(_) => {}, Err(TransactionStorageError::DieselError(DieselError::NotFound)) => { return Err(TransactionStorageError::ValueNotFound(DbKey::CompletedTransaction( tx_id, ))); }, Err(e) => return Err(e), - }; + } if start.elapsed().as_millis() > 0 { trace!( target: LOG_TARGET, @@ -1285,10 +1293,8 @@ impl TransactionBackend for TransactionServiceSqliteDatabase { fn abandon_coinbase_transaction(&self, tx_id: TxId) -> Result<(), TransactionStorageError> { let conn = self.database_connection.get_pooled_connection()?; - match CompletedTransactionSql::find_by_cancelled(tx_id, false, &conn) { - Ok(tx) => { - tx.abandon_coinbase(&conn)?; - }, + match CompletedTransactionSql::find_and_abandon_coinbase(tx_id, &conn) { + Ok(_) => {}, Err(TransactionStorageError::DieselError(DieselError::NotFound)) => { return Err(TransactionStorageError::ValueNotFound(DbKey::CompletedTransaction( tx_id, @@ -1390,6 +1396,68 @@ impl InboundTransactionSql { .first::(conn)?) } + pub fn mark_direct_send_success(tx_id: TxId, conn: &SqliteConnection) -> Result<(), TransactionStorageError> { + diesel::update( + inbound_transactions::table + .filter(inbound_transactions::tx_id.eq(tx_id.as_u64() as i64)) + .filter(inbound_transactions::cancelled.eq(i32::from(false))), + ) + .set(UpdateInboundTransactionSql { + cancelled: None, + direct_send_success: Some(1i32), + receiver_protocol: None, + send_count: None, + last_send_timestamp: None, + }) + .execute(conn) + .num_rows_affected_or_not_found(1)?; + + Ok(()) + } + + pub fn complete_inbound_transaction(tx_id: TxId, conn: &SqliteConnection) -> Result<(), TransactionStorageError> { + diesel::delete( + inbound_transactions::table + .filter(inbound_transactions::tx_id.eq(tx_id.as_u64() as i64)) + .filter(inbound_transactions::cancelled.eq(i32::from(false))), + ) + .execute(conn) + .num_rows_affected_or_not_found(1)?; + + Ok(()) + } + + pub fn increment_send_count(tx_id: TxId, conn: &SqliteConnection) -> Result<(), TransactionStorageError> { + diesel::update( + inbound_transactions::table + .filter(inbound_transactions::tx_id.eq(tx_id.as_u64() as i64)) + .filter(inbound_transactions::cancelled.eq(i32::from(false))), + ) + .set(UpdateInboundTransactionSql { + cancelled: None, + direct_send_success: None, + receiver_protocol: None, + send_count: Some( + if let Some(value) = inbound_transactions::table + .filter(inbound_transactions::tx_id.eq(tx_id.as_u64() as i64)) + .filter(inbound_transactions::cancelled.eq(i32::from(false))) + .select(inbound_transactions::send_count) + .load::(conn)? 
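+ // A sub-select reads the current send_count so that this UPDATE can store send_count + 1.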
+ .first() + { + value + 1 + } else { + return Err(TransactionStorageError::DieselError(DieselError::NotFound)); + }, + ), + last_send_timestamp: Some(Some(Utc::now().naive_utc())), + }) + .execute(conn) + .num_rows_affected_or_not_found(1)?; + + Ok(()) + } + pub fn delete(&self, conn: &SqliteConnection) -> Result<(), TransactionStorageError> { let num_deleted = diesel::delete(inbound_transactions::table.filter(inbound_transactions::tx_id.eq(&self.tx_id))) @@ -1421,17 +1489,23 @@ impl InboundTransactionSql { Ok(()) } - pub fn set_cancelled(&self, cancelled: bool, conn: &SqliteConnection) -> Result<(), TransactionStorageError> { - self.update( - UpdateInboundTransactionSql { + pub fn find_and_set_cancelled( + tx_id: TxId, + cancelled: bool, + conn: &SqliteConnection, + ) -> Result<(), TransactionStorageError> { + diesel::update(inbound_transactions::table.filter(inbound_transactions::tx_id.eq(tx_id.as_u64() as i64))) + .set(UpdateInboundTransactionSql { cancelled: Some(i32::from(cancelled)), direct_send_success: None, receiver_protocol: None, send_count: None, last_send_timestamp: None, - }, - conn, - ) + }) + .execute(conn) + .num_rows_affected_or_not_found(1)?; + + Ok(()) } pub fn update_encryption(&self, conn: &SqliteConnection) -> Result<(), TransactionStorageError> { @@ -1589,6 +1663,63 @@ impl OutboundTransactionSql { .first::(conn)?) } + pub fn mark_direct_send_success(tx_id: TxId, conn: &SqliteConnection) -> Result<(), TransactionStorageError> { + diesel::update( + outbound_transactions::table + .filter(outbound_transactions::tx_id.eq(tx_id.as_u64() as i64)) + .filter(outbound_transactions::cancelled.eq(i32::from(false))), + ) + .set(UpdateOutboundTransactionSql { + cancelled: None, + direct_send_success: Some(1i32), + sender_protocol: None, + send_count: None, + last_send_timestamp: None, + }) + .execute(conn) + .num_rows_affected_or_not_found(1)?; + + Ok(()) + } + + pub fn complete_outbound_transaction(tx_id: TxId, conn: &SqliteConnection) -> Result<(), TransactionStorageError> { + diesel::delete( + outbound_transactions::table + .filter(outbound_transactions::tx_id.eq(tx_id.as_u64() as i64)) + .filter(outbound_transactions::cancelled.eq(i32::from(false))), + ) + .execute(conn) + .num_rows_affected_or_not_found(1)?; + + Ok(()) + } + + pub fn increment_send_count(tx_id: TxId, conn: &SqliteConnection) -> Result<(), TransactionStorageError> { + diesel::update(outbound_transactions::table.filter(outbound_transactions::tx_id.eq(tx_id.as_u64() as i64))) + .set(UpdateOutboundTransactionSql { + cancelled: None, + direct_send_success: None, + sender_protocol: None, + send_count: Some( + if let Some(value) = outbound_transactions::table + .filter(outbound_transactions::tx_id.eq(tx_id.as_u64() as i64)) + .select(outbound_transactions::send_count) + .load::(conn)? 
+ .first() + { + value + 1 + } else { + return Err(TransactionStorageError::DieselError(DieselError::NotFound)); + }, + ), + last_send_timestamp: Some(Some(Utc::now().naive_utc())), + }) + .execute(conn) + .num_rows_affected_or_not_found(1)?; + + Ok(()) + } + pub fn delete(&self, conn: &SqliteConnection) -> Result<(), TransactionStorageError> { diesel::delete(outbound_transactions::table.filter(outbound_transactions::tx_id.eq(&self.tx_id))) .execute(conn) @@ -1609,17 +1740,23 @@ impl OutboundTransactionSql { Ok(()) } - pub fn set_cancelled(&self, cancelled: bool, conn: &SqliteConnection) -> Result<(), TransactionStorageError> { - self.update( - UpdateOutboundTransactionSql { + pub fn find_and_set_cancelled( + tx_id: TxId, + cancelled: bool, + conn: &SqliteConnection, + ) -> Result<(), TransactionStorageError> { + diesel::update(outbound_transactions::table.filter(outbound_transactions::tx_id.eq(tx_id.as_u64() as i64))) + .set(UpdateOutboundTransactionSql { cancelled: Some(i32::from(cancelled)), direct_send_success: None, sender_protocol: None, send_count: None, last_send_timestamp: None, - }, - conn, - ) + }) + .execute(conn) + .num_rows_affected_or_not_found(1)?; + + Ok(()) } pub fn update_encryption(&self, conn: &SqliteConnection) -> Result<(), TransactionStorageError> { @@ -1823,6 +1960,23 @@ impl CompletedTransactionSql { .load::(conn)?) } + pub fn find_and_abandon_coinbase(tx_id: TxId, conn: &SqliteConnection) -> Result<(), TransactionStorageError> { + let _ = diesel::update( + completed_transactions::table + .filter(completed_transactions::tx_id.eq(tx_id.as_u64() as i64)) + .filter(completed_transactions::cancelled.is_null()) + .filter(completed_transactions::coinbase_block_height.is_not_null()), + ) + .set(UpdateCompletedTransactionSql { + cancelled: Some(Some(TxCancellationReason::AbandonedCoinbase as i32)), + ..Default::default() + }) + .execute(conn) + .num_rows_affected_or_not_found(1)?; + + Ok(()) + } + pub fn find(tx_id: TxId, conn: &SqliteConnection) -> Result { Ok(completed_transactions::table .filter(completed_transactions::tx_id.eq(tx_id.as_u64() as i64)) @@ -1847,6 +2001,70 @@ impl CompletedTransactionSql { Ok(query.first::(conn)?) } + pub fn reject_completed_transaction( + tx_id: TxId, + reason: TxCancellationReason, + conn: &SqliteConnection, + ) -> Result<(), TransactionStorageError> { + diesel::update( + completed_transactions::table + .filter(completed_transactions::tx_id.eq(tx_id.as_u64() as i64)) + .filter(completed_transactions::cancelled.is_null()), + ) + .set(UpdateCompletedTransactionSql { + cancelled: Some(Some(reason as i32)), + status: Some(TransactionStatus::Rejected as i32), + ..Default::default() + }) + .execute(conn) + .num_rows_affected_or_not_found(1)?; + + Ok(()) + } + + pub fn increment_send_count(tx_id: TxId, conn: &SqliteConnection) -> Result<(), TransactionStorageError> { + // This query uses a sub-query to retrieve an existing value in the table + diesel::update(completed_transactions::table.filter(completed_transactions::tx_id.eq(tx_id.as_u64() as i64))) + .set(UpdateCompletedTransactionSql { + send_count: Some( + if let Some(value) = completed_transactions::table + .filter(completed_transactions::tx_id.eq(tx_id.as_u64() as i64)) + .select(completed_transactions::send_count) + .load::(conn)? 
+ .first() + { + value + 1 + } else { + return Err(TransactionStorageError::DieselError(DieselError::NotFound)); + }, + ), + last_send_timestamp: Some(Some(Utc::now().naive_utc())), + ..Default::default() + }) + .execute(conn) + .num_rows_affected_or_not_found(1)?; + + Ok(()) + } + + pub fn reject_coinbases_at_block_height( + block_height: i64, + reason: TxCancellationReason, + conn: &SqliteConnection, + ) -> Result { + Ok(diesel::update( + completed_transactions::table + .filter(completed_transactions::status.eq(TransactionStatus::Coinbase as i32)) + .filter(completed_transactions::coinbase_block_height.eq(block_height)), + ) + .set(UpdateCompletedTransactionSql { + cancelled: Some(Some(reason as i32)), + status: Some(TransactionStatus::Rejected as i32), + ..Default::default() + }) + .execute(conn)?) + } + pub fn delete(&self, conn: &SqliteConnection) -> Result<(), TransactionStorageError> { let num_deleted = diesel::delete(completed_transactions::table.filter(completed_transactions::tx_id.eq(&self.tx_id))) @@ -1871,58 +2089,70 @@ impl CompletedTransactionSql { Ok(()) } - pub fn reject(&self, reason: TxCancellationReason, conn: &SqliteConnection) -> Result<(), TransactionStorageError> { - self.update( - UpdateCompletedTransactionSql { - cancelled: Some(Some(reason as i32)), - status: Some(TransactionStatus::Rejected as i32), - ..Default::default() - }, - conn, - )?; - - Ok(()) - } - - pub fn abandon_coinbase(&self, conn: &SqliteConnection) -> Result<(), TransactionStorageError> { - if self.coinbase_block_height.is_none() { - return Err(TransactionStorageError::NotCoinbase); - } - - self.update( - UpdateCompletedTransactionSql { - cancelled: Some(Some(TxCancellationReason::AbandonedCoinbase as i32)), + pub fn update_mined_height( + tx_id: TxId, + num_confirmations: u64, + status: TransactionStatus, + mined_height: u64, + mined_in_block: BlockHash, + mined_timestamp: u64, + conn: &SqliteConnection, + ) -> Result<(), TransactionStorageError> { + diesel::update(completed_transactions::table.filter(completed_transactions::tx_id.eq(tx_id.as_u64() as i64))) + .set(UpdateCompletedTransactionSql { + confirmations: Some(Some(num_confirmations as i64)), + status: Some(status as i32), + mined_height: Some(Some(mined_height as i64)), + mined_in_block: Some(Some(mined_in_block.to_vec())), + mined_timestamp: Some(NaiveDateTime::from_timestamp(mined_timestamp as i64, 0)), + // If the tx is mined, then it can't be cancelled + cancelled: None, ..Default::default() - }, - conn, - )?; + }) + .execute(conn) + .num_rows_affected_or_not_found(1)?; Ok(()) } - pub fn set_as_unmined(&self, conn: &SqliteConnection) -> Result<(), TransactionStorageError> { - let status = if self.coinbase_block_height.is_some() { - Some(TransactionStatus::Coinbase as i32) - } else if self.status == TransactionStatus::FauxConfirmed as i32 { - Some(TransactionStatus::FauxUnconfirmed as i32) - } else if self.status == TransactionStatus::Broadcast as i32 { - Some(TransactionStatus::Broadcast as i32) - } else { - Some(TransactionStatus::Completed as i32) - }; - - self.update( - UpdateCompletedTransactionSql { - status, + pub fn set_as_unmined(tx_id: TxId, conn: &SqliteConnection) -> Result<(), TransactionStorageError> { + // This query uses two sub-queries to retrieve existing values in the table + diesel::update(completed_transactions::table.filter(completed_transactions::tx_id.eq(tx_id.as_u64() as i64))) + .set(UpdateCompletedTransactionSql { + status: { + if let Some(Some(_coinbase_block_height)) = completed_transactions::table + 
.filter(completed_transactions::tx_id.eq(tx_id.as_u64() as i64)) + .select(completed_transactions::coinbase_block_height) + .load::>(conn)? + .first() + { + Some(TransactionStatus::Coinbase as i32) + } else if let Some(status) = completed_transactions::table + .filter(completed_transactions::tx_id.eq(tx_id.as_u64() as i64)) + .select(completed_transactions::status) + .load::(conn)? + .first() + { + if *status == TransactionStatus::FauxConfirmed as i32 { + Some(TransactionStatus::FauxUnconfirmed as i32) + } else if *status == TransactionStatus::Broadcast as i32 { + Some(TransactionStatus::Broadcast as i32) + } else { + Some(TransactionStatus::Completed as i32) + } + } else { + return Err(TransactionStorageError::DieselError(DieselError::NotFound)); + } + }, mined_in_block: Some(None), mined_height: Some(None), confirmations: Some(None), // Turns out it should not be cancelled cancelled: Some(None), ..Default::default() - }, - conn, - )?; + }) + .execute(conn) + .num_rows_affected_or_not_found(1)?; // Ideally the outputs should be marked unmined here as well, but because of the separation of classes, // that will be done in the outputs service. @@ -1941,45 +2171,6 @@ impl CompletedTransactionSql { Ok(()) } - - pub fn update_mined_height( - &self, - mined_height: u64, - mined_in_block: BlockHash, - mined_timestamp: u64, - num_confirmations: u64, - is_confirmed: bool, - conn: &SqliteConnection, - is_faux: bool, - ) -> Result<(), TransactionStorageError> { - let status = if is_confirmed { - if is_faux { - TransactionStatus::FauxConfirmed as i32 - } else { - TransactionStatus::MinedConfirmed as i32 - } - } else if is_faux { - TransactionStatus::FauxUnconfirmed as i32 - } else { - TransactionStatus::MinedUnconfirmed as i32 - }; - - self.update( - UpdateCompletedTransactionSql { - confirmations: Some(Some(num_confirmations as i64)), - status: Some(status), - mined_height: Some(Some(mined_height as i64)), - mined_in_block: Some(Some(mined_in_block.to_vec())), - mined_timestamp: Some(NaiveDateTime::from_timestamp(mined_timestamp as i64, 0)), - // If the tx is mined, then it can't be cancelled - cancelled: None, - ..Default::default() - }, - conn, - )?; - - Ok(()) - } } impl Encryptable for CompletedTransactionSql { @@ -2240,6 +2431,7 @@ mod test { InboundTransactionSql, OutboundTransactionSql, TransactionServiceSqliteDatabase, + UpdateCompletedTransactionSql, }, }, util::encryption::Encryptable, @@ -2517,16 +2709,10 @@ mod test { .unwrap(); assert!(InboundTransactionSql::find_by_cancelled(inbound_tx1.tx_id, true, &conn).is_err()); - InboundTransactionSql::try_from(inbound_tx1.clone()) - .unwrap() - .set_cancelled(true, &conn) - .unwrap(); + InboundTransactionSql::find_and_set_cancelled(inbound_tx1.tx_id, true, &conn).unwrap(); assert!(InboundTransactionSql::find_by_cancelled(inbound_tx1.tx_id, false, &conn).is_err()); assert!(InboundTransactionSql::find_by_cancelled(inbound_tx1.tx_id, true, &conn).is_ok()); - InboundTransactionSql::try_from(inbound_tx1.clone()) - .unwrap() - .set_cancelled(false, &conn) - .unwrap(); + InboundTransactionSql::find_and_set_cancelled(inbound_tx1.tx_id, false, &conn).unwrap(); assert!(InboundTransactionSql::find_by_cancelled(inbound_tx1.tx_id, true, &conn).is_err()); assert!(InboundTransactionSql::find_by_cancelled(inbound_tx1.tx_id, false, &conn).is_ok()); OutboundTransactionSql::try_from(outbound_tx1.clone()) @@ -2535,16 +2721,10 @@ mod test { .unwrap(); assert!(OutboundTransactionSql::find_by_cancelled(outbound_tx1.tx_id, true, &conn).is_err()); - 
OutboundTransactionSql::try_from(outbound_tx1.clone()) - .unwrap() - .set_cancelled(true, &conn) - .unwrap(); + OutboundTransactionSql::find_and_set_cancelled(outbound_tx1.tx_id, true, &conn).unwrap(); assert!(OutboundTransactionSql::find_by_cancelled(outbound_tx1.tx_id, false, &conn).is_err()); assert!(OutboundTransactionSql::find_by_cancelled(outbound_tx1.tx_id, true, &conn).is_ok()); - OutboundTransactionSql::try_from(outbound_tx1.clone()) - .unwrap() - .set_cancelled(false, &conn) - .unwrap(); + OutboundTransactionSql::find_and_set_cancelled(outbound_tx1.tx_id, false, &conn).unwrap(); assert!(OutboundTransactionSql::find_by_cancelled(outbound_tx1.tx_id, true, &conn).is_err()); assert!(OutboundTransactionSql::find_by_cancelled(outbound_tx1.tx_id, false, &conn).is_ok()); @@ -2556,7 +2736,14 @@ mod test { assert!(CompletedTransactionSql::find_by_cancelled(completed_tx1.tx_id, true, &conn).is_err()); CompletedTransactionSql::try_from(completed_tx1.clone()) .unwrap() - .reject(TxCancellationReason::Unknown, &conn) + .update( + UpdateCompletedTransactionSql { + cancelled: Some(Some(TxCancellationReason::Unknown as i32)), + status: Some(TransactionStatus::Rejected as i32), + ..Default::default() + }, + &conn, + ) .unwrap(); assert!(CompletedTransactionSql::find_by_cancelled(completed_tx1.tx_id, false, &conn).is_err()); assert!(CompletedTransactionSql::find_by_cancelled(completed_tx1.tx_id, true, &conn).is_ok()); diff --git a/base_layer/wallet/tests/contacts_service.rs b/base_layer/wallet/tests/contacts_service.rs index e31f5e5cd4..a37dba5e1c 100644 --- a/base_layer/wallet/tests/contacts_service.rs +++ b/base_layer/wallet/tests/contacts_service.rs @@ -98,6 +98,7 @@ pub fn setup_contacts_service( user_agent: "tari/test-wallet".to_string(), rpc_max_simultaneous_sessions: 0, rpc_max_sessions_per_peer: 0, + listener_liveness_check_interval: None, }; let peer_message_subscription_factory = Arc::new(subscription_factory); let shutdown = Shutdown::new(); diff --git a/base_layer/wallet/tests/wallet.rs b/base_layer/wallet/tests/wallet.rs index a0cae8e830..0041d92b90 100644 --- a/base_layer/wallet/tests/wallet.rs +++ b/base_layer/wallet/tests/wallet.rs @@ -145,6 +145,7 @@ async fn create_wallet( auxiliary_tcp_listener_address: None, rpc_max_simultaneous_sessions: 0, rpc_max_sessions_per_peer: 0, + listener_liveness_check_interval: None, }; let sql_database_path = comms_config @@ -679,6 +680,7 @@ async fn test_import_utxo() { auxiliary_tcp_listener_address: None, rpc_max_simultaneous_sessions: 0, rpc_max_sessions_per_peer: 0, + listener_liveness_check_interval: None, }; let config = WalletConfig { p2p: comms_config, diff --git a/base_layer/wallet_ffi/Cargo.toml b/base_layer/wallet_ffi/Cargo.toml index 66bc653af3..1ce077c8bc 100644 --- a/base_layer/wallet_ffi/Cargo.toml +++ b/base_layer/wallet_ffi/Cargo.toml @@ -3,7 +3,7 @@ name = "tari_wallet_ffi" authors = ["The Tari Development Community"] description = "Tari cryptocurrency wallet C FFI bindings" license = "BSD-3-Clause" -version = "0.38.5" +version = "0.38.7" edition = "2018" [dependencies] diff --git a/base_layer/wallet_ffi/src/lib.rs b/base_layer/wallet_ffi/src/lib.rs index 0b31851f5a..c0679ec359 100644 --- a/base_layer/wallet_ffi/src/lib.rs +++ b/base_layer/wallet_ffi/src/lib.rs @@ -3919,6 +3919,7 @@ pub unsafe extern "C" fn comms_config_create( user_agent: format!("tari/mobile_wallet/{}", env!("CARGO_PKG_VERSION")), rpc_max_simultaneous_sessions: 0, rpc_max_sessions_per_peer: 0, + listener_liveness_check_interval: None, }; 
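Each of the `P2pConfig` constructions patched here simply opts out with `listener_liveness_check_interval: None`. For reference, a minimal sketch of what opting in could look like through the `CommsBuilder::set_liveness_check` method added further down in this patch; the helper function, its name, and the wiring from config to builder are assumptions for illustration and not part of the patch:

```rust
use std::time::Duration;

use tari_comms::CommsBuilder; // assuming the usual crate-root re-export

// Hypothetical helper: maps an optional `listener_liveness_check_interval`
// setting (seconds, as in the TOML presets) onto the new builder method.
// `None` leaves the self-check disabled, matching the wallet default.
fn with_liveness_check(builder: CommsBuilder, interval_secs: Option<u64>) -> CommsBuilder {
    builder.set_liveness_check(interval_secs.map(Duration::from_secs))
}
```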
Box::into_raw(Box::new(config)) diff --git a/changelog.md b/changelog.md index 1c8d231881..16b56371ef 100644 --- a/changelog.md +++ b/changelog.md @@ -2,6 +2,54 @@ All notable changes to this project will be documented in this file. See [standard-version](https://github.com/conventional-changelog/standard-version) for commit guidelines. +### [0.38.7](https://github.com/tari-project/tari/compare/v0.38.6...v0.38.7) (2022-10-11) + + +### Bug Fixes + +* **core:** only resize db if migration is required ([#4792](https://github.com/tari-project/tari/issues/4792)) ([4811a57](https://github.com/tari-project/tari/commit/4811a5772665af4e3b9007ccadedfc651e1d232e)) +* **miner:** clippy error ([#4793](https://github.com/tari-project/tari/issues/4793)) ([734db22](https://github.com/tari-project/tari/commit/734db22bbdd36b5371aa9c70f4342bb0d3c2f3a4)) + +### [0.38.6](https://github.com/tari-project/tari/compare/v0.38.5...v0.38.6) (2022-10-11) + + +### Features + +* **base-node:** add client connection count to status line ([#4774](https://github.com/tari-project/tari/issues/4774)) ([8339b1d](https://github.com/tari-project/tari/commit/8339b1de1bace96671d8eba0cf309adb9f78014a)) +* move nonce to first in sha hash ([#4778](https://github.com/tari-project/tari/issues/4778)) ([054a314](https://github.com/tari-project/tari/commit/054a314f015ab7a3f1e571f3ee0c7a58ad0ebb5a)) +* remove dalek ng ([#4769](https://github.com/tari-project/tari/issues/4769)) ([953b0b7](https://github.com/tari-project/tari/commit/953b0b7cfc371467e7d15e933e79c8d07712f666)) + + +### Bug Fixes + +* batch rewind operations ([#4752](https://github.com/tari-project/tari/issues/4752)) ([79d3c47](https://github.com/tari-project/tari/commit/79d3c47a86bc37be0117b33c869f9e04df068384)) +* **ci:** fix client path for nodejs ([#4765](https://github.com/tari-project/tari/issues/4765)) ([c7b5e68](https://github.com/tari-project/tari/commit/c7b5e68b400c79040f2dd92ee1cc779224e463ee)) +* **core:** only resize db if migration is required ([#4792](https://github.com/tari-project/tari/issues/4792)) ([4811a57](https://github.com/tari-project/tari/commit/4811a5772665af4e3b9007ccadedfc651e1d232e)) +* **dht:** remove some invalid saf failure cases ([#4787](https://github.com/tari-project/tari/issues/4787)) ([86b4d94](https://github.com/tari-project/tari/commit/86b4d9437f87cb31ed922ff7a7dc73e7fe29eb69)) +* fix config.toml bug ([#4780](https://github.com/tari-project/tari/issues/4780)) ([f6043c1](https://github.com/tari-project/tari/commit/f6043c1f03f33a34e2612516ffca8a589e319001)) +* **miner:** clippy error ([#4793](https://github.com/tari-project/tari/issues/4793)) ([734db22](https://github.com/tari-project/tari/commit/734db22bbdd36b5371aa9c70f4342bb0d3c2f3a4)) +* **p2p/liveness:** remove fallible unwrap ([#4784](https://github.com/tari-project/tari/issues/4784)) ([e59be99](https://github.com/tari-project/tari/commit/e59be99401fc4b50f1b4f5a6a16948959e5c56a1)) +* **tari-script:** use tari script encoding for execution stack serde de/serialization ([#4791](https://github.com/tari-project/tari/issues/4791)) ([c62f7eb](https://github.com/tari-project/tari/commit/c62f7eb6c5b6b4336c7351bd89cb3a700fde1bb2)) + ### [0.38.5](https://github.com/tari-project/tari/compare/v0.38.4...v0.38.5) (2022-10-03) diff --git a/common/Cargo.toml b/common/Cargo.toml index 61350b5cb9..9bdb4ee1d8 100644 --- a/common/Cargo.toml +++ b/common/Cargo.toml @@ -6,7 +6,7 @@ repository = "https://github.com/tari-project/tari" homepage = "https://tari.com" readme = "README.md" license = "BSD-3-Clause" -version = "0.38.5" +version = "0.38.7" edition = "2018" [features] diff --git a/common/config/presets/c_base_node.toml b/common/config/presets/c_base_node.toml index 26a5ae09c1..ca3c7aa19c 100644 --- a/common/config/presets/c_base_node.toml +++ b/common/config/presets/c_base_node.toml @@ -118,7 +118,7 @@ track_reorgs = true # The maximum number of transactions to sync in a single sync session Default: 10_000 #service.initial_sync_max_transactions = 10_000 # The maximum number of blocks added via sync or re-org to triggering a sync -#block_sync_trigger = 5 +#service.block_sync_trigger = 5 [base_node.state_machine] # The initial max sync latency. If a peer fails to stream a header/block within this deadline another sync peer will be @@ -178,6 +178,8 @@ track_reorgs = true # CIDR for addresses allowed to enter into liveness check mode on the listener. #listener_liveness_allowlist_cidrs = [] +# Enables periodic socket-level liveness checks. Default: Disabled +listener_liveness_check_interval = 15 # User agent string for this node #user_agent = "" diff --git a/common/config/presets/d_console_wallet.toml b/common/config/presets/d_console_wallet.toml index a44929a546..c479f95f75 100644 --- a/common/config/presets/d_console_wallet.toml +++ b/common/config/presets/d_console_wallet.toml @@ -201,6 +201,8 @@ event_channel_size = 3500 # CIDR for addresses allowed to enter into liveness check mode on the listener. #listener_liveness_allowlist_cidrs = [] +# Enables periodic socket-level liveness checks.
Default: Disabled +# listener_liveness_check_interval = 15 # User agent string for this node #user_agent = "" diff --git a/common_sqlite/Cargo.toml b/common_sqlite/Cargo.toml index 0101cba1bf..55213455e9 100644 --- a/common_sqlite/Cargo.toml +++ b/common_sqlite/Cargo.toml @@ -3,7 +3,7 @@ name = "tari_common_sqlite" authors = ["The Tari Development Community"] description = "Tari cryptocurrency wallet library" license = "BSD-3-Clause" -version = "0.38.5" +version = "0.38.7" edition = "2018" # See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html diff --git a/comms/core/Cargo.toml b/comms/core/Cargo.toml index bfab6a6a11..c1684e6b47 100644 --- a/comms/core/Cargo.toml +++ b/comms/core/Cargo.toml @@ -6,7 +6,7 @@ repository = "https://github.com/tari-project/tari" homepage = "https://tari.com" readme = "README.md" license = "BSD-3-Clause" -version = "0.38.5" +version = "0.38.7" edition = "2018" [dependencies] diff --git a/comms/core/src/builder/comms_node.rs b/comms/core/src/builder/comms_node.rs index 48e2da083f..1279e97d1b 100644 --- a/comms/core/src/builder/comms_node.rs +++ b/comms/core/src/builder/comms_node.rs @@ -20,7 +20,7 @@ // WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE // USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. -use std::{iter, sync::Arc}; +use std::{iter, sync::Arc, time::Duration}; use log::*; use tari_shutdown::ShutdownSignal; @@ -125,6 +125,12 @@ impl UnspawnedCommsNode { self } + /// Sets the interval at which to perform self-liveness checks on the configured public address, or None to disable them + pub fn set_liveness_check(mut self, interval: Option<Duration>) -> Self { + self.builder = self.builder.set_liveness_check(interval); + self + } + /// Spawn a new node using the specified [Transport](crate::transports::Transport). pub async fn spawn_with_transport<TTransport>(self, transport: TTransport) -> Result<CommsNode, CommsBuilderError> where @@ -317,6 +323,11 @@ impl CommsNode { self.listening_info.bind_address() } + /// Return [ListenerInfo] + pub fn listening_info(&self) -> &ListenerInfo { + &self.listening_info + } + /// Return the Ip/Tcp address that this node is listening on pub fn hidden_service(&self) -> Option<&tor::HiddenService> { self.hidden_service.as_ref() diff --git a/comms/core/src/builder/mod.rs b/comms/core/src/builder/mod.rs index 4d4695859c..1975665809 100644 --- a/comms/core/src/builder/mod.rs +++ b/comms/core/src/builder/mod.rs @@ -265,6 +265,12 @@ impl CommsBuilder { self } + /// Enable and set the interval for self-liveness checks, or None to disable them (default) + pub fn set_liveness_check(mut self, check_interval: Option<Duration>) -> Self { + self.connection_manager_config.liveness_self_check_interval = check_interval; + self + } + fn make_peer_manager(&mut self) -> Result<Arc<PeerManager>, CommsBuilderError> { let file_lock = self.peer_storage_file_lock.take(); diff --git a/comms/core/src/connection_manager/dialer.rs b/comms/core/src/connection_manager/dialer.rs index 195139762d..c3154816b8 100644 --- a/comms/core/src/connection_manager/dialer.rs +++ b/comms/core/src/connection_manager/dialer.rs @@ -531,31 +531,32 @@ where dial_state.peer().node_id.short_str() ); - let dial_fut = async move { - let mut socket = transport.dial(address.clone()).await.map_err(|err| { - ConnectionManagerError::TransportError { - address: address.to_string(), - details: err.to_string(), - } - })?; - debug!( - target: LOG_TARGET, - "Socket established on '{}'.
Performing noise upgrade protocol", address - ); - - socket - .write(&[network_byte]) + let dial_fut = + async move { + let mut socket = transport.dial(address).await.map_err(|err| { + ConnectionManagerError::TransportError { + address: address.to_string(), + details: err.to_string(), + } + })?; + debug!( + target: LOG_TARGET, + "Socket established on '{}'. Performing noise upgrade protocol", address + ); + + socket + .write(&[network_byte]) + .await + .map_err(|_| ConnectionManagerError::WireFormatSendFailed)?; + + let noise_socket = time::timeout( + Duration::from_secs(40), + noise_config.upgrade_socket(socket, ConnectionDirection::Outbound), + ) .await - .map_err(|_| ConnectionManagerError::WireFormatSendFailed)?; - - let noise_socket = time::timeout( - Duration::from_secs(40), - noise_config.upgrade_socket(socket, ConnectionDirection::Outbound), - ) - .await - .map_err(|_| ConnectionManagerError::NoiseProtocolTimeout)??; - Result::<_, ConnectionManagerError>::Ok(noise_socket) - }; + .map_err(|_| ConnectionManagerError::NoiseProtocolTimeout)??; + Result::<_, ConnectionManagerError>::Ok(noise_socket) + }; pin_mut!(dial_fut); let either = future::select(dial_fut, cancel_signal.clone()).await; diff --git a/comms/core/src/connection_manager/listener.rs b/comms/core/src/connection_manager/listener.rs index bf58dddbf8..3df50b8696 100644 --- a/comms/core/src/connection_manager/listener.rs +++ b/comms/core/src/connection_manager/listener.rs @@ -36,7 +36,7 @@ use log::*; use tari_shutdown::{oneshot_trigger, oneshot_trigger::OneshotTrigger, ShutdownSignal}; use tokio::{ io::{AsyncRead, AsyncReadExt, AsyncWrite, AsyncWriteExt}, - sync::mpsc, + sync::{mpsc, watch}, time, }; use tokio_stream::StreamExt; @@ -53,7 +53,7 @@ use super::{ use crate::{ bounded_executor::BoundedExecutor, connection_manager::{ - liveness::LivenessSession, + liveness::{LivenessCheck, LivenessSession, LivenessStatus}, metrics, wire_mode::{WireMode, LIVENESS_WIRE_MODE}, }, @@ -83,12 +83,12 @@ pub struct PeerListener { node_identity: Arc, our_supported_protocols: Vec, liveness_session_count: Arc, - on_listening: OneshotTrigger>, + on_listening: OneshotTrigger), ConnectionManagerError>>, } impl PeerListener where - TTransport: Transport + Send + Sync + 'static, + TTransport: Transport + Clone + Send + Sync + 'static, TTransport::Output: AsyncRead + AsyncWrite + Send + Unpin + 'static, { pub fn new( @@ -121,7 +121,10 @@ where /// in binding the listener socket // This returns an impl Future and is not async because we want to exclude &self from the future so that it has a // 'static lifetime as well as to flatten the oneshot result for ergonomics - pub fn on_listening(&self) -> impl Future> + 'static { + pub fn on_listening( + &self, + ) -> impl Future), ConnectionManagerError>> + 'static + { let signal = self.on_listening.to_signal(); signal.map(|r| r.ok_or(ConnectionManagerError::ListenerOneshotCancelled)?) } @@ -132,7 +135,7 @@ where self } - pub async fn listen(self) -> Result { + pub async fn listen(self) -> Result<(Multiaddr, watch::Receiver), ConnectionManagerError> { let on_listening = self.on_listening(); runtime::current().spawn(self.run()); on_listening.await @@ -145,7 +148,9 @@ where Ok((mut inbound, address)) => { info!(target: LOG_TARGET, "Listening for peer connections on '{}'", address); - self.on_listening.broadcast(Ok(address)); + let liveness_watch = self.spawn_liveness_check(); + + self.on_listening.broadcast(Ok((address, liveness_watch))); loop { tokio::select! 
{ @@ -229,6 +234,21 @@ where }); } + fn spawn_liveness_check(&self) -> watch::Receiver { + match self.config.liveness_self_check_interval { + Some(interval) => LivenessCheck::spawn( + self.transport.clone(), + self.node_identity.public_address(), + interval, + self.shutdown_signal.clone(), + ), + None => { + let (_, rx) = watch::channel(LivenessStatus::Disabled); + rx + }, + } + } + async fn spawn_listen_task(&self, mut socket: TTransport::Output, peer_addr: Multiaddr) { let node_identity = self.node_identity.clone(); let peer_manager = self.peer_manager.clone(); @@ -295,8 +315,9 @@ where let _result = socket.shutdown().await; }, Ok(WireMode::Liveness) => { - if liveness_session_count.load(Ordering::SeqCst) > 0 && - Self::is_address_in_liveness_cidr_range(&peer_addr, &config.liveness_cidr_allowlist) + if config.liveness_self_check_interval.is_some() || + (liveness_session_count.load(Ordering::SeqCst) > 0 && + Self::is_address_in_liveness_cidr_range(&peer_addr, &config.liveness_cidr_allowlist)) { debug!( target: LOG_TARGET, @@ -430,7 +451,7 @@ where let bind_address = self.bind_address.clone(); debug!(target: LOG_TARGET, "Attempting to listen on {}", bind_address); self.transport - .listen(bind_address.clone()) + .listen(&bind_address) .await .map_err(|err| ConnectionManagerError::ListenerError { address: bind_address.to_string(), diff --git a/comms/core/src/connection_manager/liveness.rs b/comms/core/src/connection_manager/liveness.rs index 39870dcf60..cf73f2a06f 100644 --- a/comms/core/src/connection_manager/liveness.rs +++ b/comms/core/src/connection_manager/liveness.rs @@ -20,14 +20,27 @@ // WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE // USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. -use std::future::Future; +use std::{ + future::Future, + time::{Duration, Instant}, +}; -use futures::StreamExt; -use tokio::io::{AsyncRead, AsyncWrite}; +use futures::{future, SinkExt, StreamExt}; +use log::*; +use multiaddr::Multiaddr; +use tari_shutdown::ShutdownSignal; +use tokio::{ + io::{AsyncRead, AsyncWrite, AsyncWriteExt}, + sync::watch, + time, +}; use tokio_util::codec::{Framed, LinesCodec, LinesCodecError}; +use crate::{connection_manager::wire_mode::WireMode, transports::Transport}; + /// Max line length accepted by the liveness session. 
const MAX_LINE_LENGTH: usize = 50; +const LOG_TARGET: &str = "comms::connection_manager::liveness"; /// Echo server for liveness checks pub struct LivenessSession { @@ -49,6 +62,120 @@ where TSocket: AsyncRead + AsyncWrite + Unpin } } +#[derive(Debug, Clone, Copy)] +pub enum LivenessStatus { + Disabled, + Checking, + Unreachable, + Live(Duration), +} + +pub struct LivenessCheck { + transport: TTransport, + address: Multiaddr, + interval: Duration, + tx_watch: watch::Sender, + shutdown_signal: ShutdownSignal, +} + +impl LivenessCheck +where + TTransport: Transport + Send + Sync + 'static, + TTransport::Output: AsyncRead + AsyncWrite + Unpin + Send, +{ + pub fn spawn( + transport: TTransport, + address: Multiaddr, + interval: Duration, + shutdown_signal: ShutdownSignal, + ) -> watch::Receiver { + let (tx_watch, rx_watch) = watch::channel(LivenessStatus::Checking); + let check = Self { + transport, + address, + interval, + tx_watch, + shutdown_signal, + }; + tokio::spawn(check.run_until_shutdown()); + rx_watch + } + + pub async fn run_until_shutdown(self) { + let shutdown_signal = self.shutdown_signal.clone(); + let run_fut = self.run(); + tokio::pin!(run_fut); + future::select(run_fut, shutdown_signal).await; + } + + pub async fn run(mut self) { + info!( + target: LOG_TARGET, + "🔌️ Starting liveness self-check with interval {:.2?}", self.interval + ); + loop { + let timer = Instant::now(); + let _ = self.tx_watch.send(LivenessStatus::Checking); + match self.transport.dial(&self.address).await { + Ok(mut socket) => { + info!(target: LOG_TARGET, "🔌 liveness dial took {:.2?}", timer.elapsed()); + if let Err(err) = socket.write(&[WireMode::Liveness.as_byte()]).await { + warn!(target: LOG_TARGET, "🔌️ liveness failed to write byte: {}", err); + self.tx_watch.send_replace(LivenessStatus::Unreachable); + continue; + } + let mut framed = Framed::new(socket, LinesCodec::new_with_max_length(MAX_LINE_LENGTH)); + loop { + match self.ping_pong(&mut framed).await { + Ok(Some(latency)) => { + info!(target: LOG_TARGET, "⚡️️ liveness check latency {:.2?}", latency); + self.tx_watch.send_replace(LivenessStatus::Live(latency)); + }, + Ok(None) => { + info!(target: LOG_TARGET, "🔌️ liveness connection closed"); + self.tx_watch.send_replace(LivenessStatus::Unreachable); + break; + }, + Err(err) => { + warn!(target: LOG_TARGET, "🔌️ ping pong failed: {}", err); + self.tx_watch.send_replace(LivenessStatus::Unreachable); + // let _ = framed.close().await; + break; + }, + } + + time::sleep(self.interval).await; + } + }, + Err(err) => { + self.tx_watch.send_replace(LivenessStatus::Unreachable); + warn!( + target: LOG_TARGET, + "🔌️ Failed to dial public address for self check: {}", err + ); + }, + } + time::sleep(self.interval).await; + } + } + + async fn ping_pong( + &mut self, + framed: &mut Framed, + ) -> Result, LinesCodecError> { + let timer = Instant::now(); + framed.send("pingpong".to_string()).await?; + match framed.next().await { + Some(res) => { + let val = res?; + debug!(target: LOG_TARGET, "Received: {}", val); + Ok(Some(timer.elapsed())) + }, + None => Ok(None), + } + } +} + #[cfg(test)] mod test { use futures::SinkExt; diff --git a/comms/core/src/connection_manager/manager.rs b/comms/core/src/connection_manager/manager.rs index ecadf7109c..6e492ff187 100644 --- a/comms/core/src/connection_manager/manager.rs +++ b/comms/core/src/connection_manager/manager.rs @@ -28,7 +28,7 @@ use tari_shutdown::{Shutdown, ShutdownSignal}; use time::Duration; use tokio::{ io::{AsyncRead, AsyncWrite}, - sync::{broadcast, mpsc, 
oneshot}, + sync::{broadcast, mpsc, oneshot, watch}, task, time, }; @@ -43,7 +43,7 @@ use super::{ }; use crate::{ backoff::Backoff, - connection_manager::{metrics, ConnectionDirection, ConnectionId}, + connection_manager::{liveness::LivenessStatus, metrics, ConnectionDirection, ConnectionId}, multiplexing::Substream, noise::NoiseConfig, peer_manager::{NodeId, NodeIdentity, PeerManagerError}, @@ -111,6 +111,8 @@ pub struct ConnectionManagerConfig { pub liveness_max_sessions: usize, /// CIDR blocks that allowlist liveness checks. Default: Localhost only (127.0.0.1/32) pub liveness_cidr_allowlist: Vec, + /// Interval to perform self-liveness ping-pong tests. Default: None/disabled + pub liveness_self_check_interval: Option, /// If set, an additional TCP-only p2p listener will be started. This is useful for local wallet connections. /// Default: None (disabled) pub auxiliary_tcp_listener_address: Option, @@ -133,9 +135,10 @@ impl Default for ConnectionManagerConfig { // This must always be true for internal crate tests #[cfg(test)] allow_test_addresses: true, - liveness_max_sessions: 0, + liveness_max_sessions: 1, time_to_first_byte: Duration::from_secs(45), liveness_cidr_allowlist: vec![cidr::AnyIpCidr::V4("127.0.0.1/32".parse().unwrap())], + liveness_self_check_interval: None, auxiliary_tcp_listener_address: None, } } @@ -146,6 +149,7 @@ impl Default for ConnectionManagerConfig { pub struct ListenerInfo { bind_address: Multiaddr, aux_bind_address: Option, + liveness_watch: watch::Receiver, } impl ListenerInfo { @@ -159,6 +163,17 @@ impl ListenerInfo { pub fn auxiliary_bind_address(&self) -> Option<&Multiaddr> { self.aux_bind_address.as_ref() } + + /// Returns the current liveness status + pub fn liveness_status(&self) -> LivenessStatus { + *self.liveness_watch.borrow() + } + + /// Waits for liveness status to change from the last time the value was checked. + pub async fn liveness_status_changed(&mut self) -> Option { + self.liveness_watch.changed().await.ok()?; + Some(*self.liveness_watch.borrow()) + } } /// The actor responsible for connection management. 
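`ListenerInfo::liveness_status` and `liveness_status_changed` above are the read side of the new `watch` channel. As a rough sketch of how a caller (for example the status-line update mentioned in the commit message) might consume them, assuming the caller has a mutable `ListenerInfo` obtained via `CommsNode::listening_info()` and that the printing below stands in for the real status line:

```rust
use tari_comms::connection_manager::LivenessStatus; // re-exported below in this patch

// Sketch: drive a status display from liveness transitions until the
// watch channel closes (i.e. the checker task has shut down).
async fn watch_liveness(mut info: ListenerInfo) {
    while let Some(status) = info.liveness_status_changed().await {
        match status {
            LivenessStatus::Disabled => println!("liveness: disabled"),
            LivenessStatus::Checking => println!("liveness: checking"),
            LivenessStatus::Unreachable => println!("liveness: unreachable"),
            LivenessStatus::Live(latency) => println!("liveness: live ({:.2?})", latency),
        }
    }
}
```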
@@ -211,8 +226,13 @@ where let aux_listener = config.auxiliary_tcp_listener_address.take().map(|addr| { info!(target: LOG_TARGET, "Starting auxiliary listener on {}", addr); + let aux_config = ConnectionManagerConfig { + // Disable liveness checks on the auxiliary listener + liveness_self_check_interval: None, + ..config.clone() + }; PeerListener::new( - config.clone(), + aux_config, addr, TcpTransport::new(), noise_config.clone(), @@ -325,21 +345,19 @@ where listener.set_supported_protocols(self.protocols.get_supported_protocols()); - let mut listener_info = ListenerInfo { - bind_address: Multiaddr::empty(), - aux_bind_address: None, - }; - match listener.listen().await { - Ok(addr) => { - listener_info.bind_address = addr; + let mut listener_info = match listener.listen().await { + Ok((bind_address, liveness_watch)) => ListenerInfo { + bind_address, + aux_bind_address: None, + liveness_watch, }, Err(err) => return Err(err), - } + }; if let Some(mut listener) = self.aux_listener.take() { listener.set_supported_protocols(self.protocols.get_supported_protocols()); - let addr = listener.listen().await?; - debug!(target: LOG_TARGET, "TCP listener bound to address {}", addr); + let (addr, _) = listener.listen().await?; + debug!(target: LOG_TARGET, "Aux TCP listener bound to address {}", addr); listener_info.aux_bind_address = Some(addr); } diff --git a/comms/core/src/connection_manager/mod.rs b/comms/core/src/connection_manager/mod.rs index 98c40dad64..6ae616669a 100644 --- a/comms/core/src/connection_manager/mod.rs +++ b/comms/core/src/connection_manager/mod.rs @@ -52,6 +52,8 @@ mod peer_connection; pub use peer_connection::{ConnectionId, NegotiatedSubstream, PeerConnection, PeerConnectionRequest}; mod liveness; +pub use liveness::LivenessStatus; + mod wire_mode; #[cfg(test)] diff --git a/comms/core/src/connection_manager/tests/listener_dialer.rs b/comms/core/src/connection_manager/tests/listener_dialer.rs index 5e0b26a92a..c2b71380bb 100644 --- a/comms/core/src/connection_manager/tests/listener_dialer.rs +++ b/comms/core/src/connection_manager/tests/listener_dialer.rs @@ -66,7 +66,7 @@ async fn listen() -> Result<(), Box> { shutdown.to_signal(), ); - let mut bind_addr = listener.listen().await?; + let (mut bind_addr, _) = listener.listen().await?; unpack_enum!(Protocol::Memory(port) = bind_addr.pop().unwrap()); assert!(port > 0); @@ -103,7 +103,7 @@ async fn smoke() { listener.set_supported_protocols(supported_protocols.clone()); // Get the listening address of the peer - let address = listener.listen().await.unwrap(); + let (address, _) = listener.listen().await.unwrap(); let node_identity2 = build_node_identity(PeerFeatures::COMMUNICATION_NODE); let noise_config2 = NoiseConfig::new(node_identity2.clone()); @@ -207,7 +207,7 @@ async fn banned() { listener.set_supported_protocols(supported_protocols.clone()); // Get the listener address of the peer - let address = listener.listen().await.unwrap(); + let (address, _) = listener.listen().await.unwrap(); let node_identity2 = build_node_identity(PeerFeatures::COMMUNICATION_NODE); // The listener has banned the dialer peer diff --git a/comms/core/src/connection_manager/wire_mode.rs b/comms/core/src/connection_manager/wire_mode.rs index 2ae7477988..e42ff3d9b7 100644 --- a/comms/core/src/connection_manager/wire_mode.rs +++ b/comms/core/src/connection_manager/wire_mode.rs @@ -22,13 +22,23 @@ use std::convert::TryFrom; -pub(crate) const LIVENESS_WIRE_MODE: u8 = 0xa6; // E +pub(crate) const LIVENESS_WIRE_MODE: u8 = 0xa6; +#[derive(Debug, Clone, Copy)] 
pub enum WireMode { Comms(u8), Liveness, } +impl WireMode { + pub fn as_byte(self) -> u8 { + match self { + WireMode::Comms(byte) => byte, + WireMode::Liveness => LIVENESS_WIRE_MODE, + } + } +} + impl TryFrom for WireMode { type Error = (); diff --git a/comms/core/src/protocol/identity.rs b/comms/core/src/protocol/identity.rs index 60103490b3..582d723894 100644 --- a/comms/core/src/protocol/identity.rs +++ b/comms/core/src/protocol/identity.rs @@ -204,9 +204,9 @@ mod test { async fn identity_exchange() { let transport = MemoryTransport; let addr = "/memory/0".parse().unwrap(); - let (mut listener, addr) = transport.listen(addr).await.unwrap(); + let (mut listener, addr) = transport.listen(&addr).await.unwrap(); - let (out_sock, in_sock) = future::join(transport.dial(addr), listener.next()).await; + let (out_sock, in_sock) = future::join(transport.dial(&addr), listener.next()).await; let mut out_sock = out_sock.unwrap(); let (mut in_sock, _) = in_sock.unwrap().unwrap(); @@ -251,9 +251,9 @@ mod test { async fn fail_cases() { let transport = MemoryTransport; let addr = "/memory/0".parse().unwrap(); - let (mut listener, addr) = transport.listen(addr).await.unwrap(); + let (mut listener, addr) = transport.listen(&addr).await.unwrap(); - let (out_sock, in_sock) = future::join(transport.dial(addr), listener.next()).await; + let (out_sock, in_sock) = future::join(transport.dial(&addr), listener.next()).await; let mut out_sock = out_sock.unwrap(); let (mut in_sock, _) = in_sock.unwrap().unwrap(); diff --git a/comms/core/src/protocol/rpc/client/mod.rs b/comms/core/src/protocol/rpc/client/mod.rs index 257905bf64..e30d3a70b0 100644 --- a/comms/core/src/protocol/rpc/client/mod.rs +++ b/comms/core/src/protocol/rpc/client/mod.rs @@ -613,7 +613,6 @@ where TSubstream: AsyncRead + AsyncWrite + Unpin + Send + StreamId debug!(target: LOG_TARGET, "Sending request: {}", req); - let mut timer = Some(Instant::now()); if reply.is_closed() { event!(Level::WARN, "Client request was cancelled before request was sent"); warn!( @@ -637,12 +636,14 @@ where TSubstream: AsyncRead + AsyncWrite + Unpin + Send + StreamId let latency = metrics::request_response_latency(&self.node_id, &self.protocol_id); let mut metrics_timer = Some(latency.start_timer()); + let timer = Instant::now(); if let Err(err) = self.send_request(req).await { warn!(target: LOG_TARGET, "{}", err); metrics::client_errors(&self.node_id, &self.protocol_id).inc(); let _result = response_tx.send(Err(err.into())).await; return Ok(()); } + let partial_latency = timer.elapsed(); loop { if self.shutdown_signal.is_triggered() { @@ -679,9 +680,9 @@ where TSubstream: AsyncRead + AsyncWrite + Unpin + Send + StreamId // let resp = match self.read_response(request_id).await { let resp = match resp_result { - Ok(resp) => { - if let Some(t) = timer.take() { - let _ = self.last_request_latency_tx.send(Some(t.elapsed())); + Ok((resp, time_to_first_msg)) => { + if let Some(t) = time_to_first_msg { + let _ = self.last_request_latency_tx.send(Some(partial_latency + t)); } event!(Level::TRACE, "Message received"); trace!( @@ -804,7 +805,10 @@ where TSubstream: AsyncRead + AsyncWrite + Unpin + Send + StreamId Ok(()) } - async fn read_response(&mut self, request_id: u16) -> Result { + async fn read_response( + &mut self, + request_id: u16, + ) -> Result<(proto::rpc::RpcResponse, Option), RpcError> { let stream_id = self.stream_id(); let protocol_name = self.protocol_name().to_string(); @@ -822,7 +826,8 @@ where TSubstream: AsyncRead + AsyncWrite + Unpin + Send + StreamId ); 
metrics::inbound_response_bytes(&self.node_id, &self.protocol_id) .observe(reader.bytes_read() as f64); - break resp; + let time_to_first_msg = reader.time_to_first_msg(); + break (resp, time_to_first_msg); }, Err(RpcError::ResponseIdDidNotMatchRequest { actual, expected }) if actual.wrapping_add(1) == request_id => @@ -888,6 +893,7 @@ struct RpcResponseReader<'a, TSubstream> { config: RpcClientConfig, request_id: u16, bytes_read: usize, + time_to_first_msg: Option<Duration>, } impl<'a, TSubstream> RpcResponseReader<'a, TSubstream> where TSubstream: AsyncRead + AsyncWrite + Unpin { config, request_id, bytes_read: 0, + time_to_first_msg: None, } } @@ -906,8 +913,14 @@ where TSubstream: AsyncRead + AsyncWrite + Unpin self.bytes_read } + pub fn time_to_first_msg(&self) -> Option<Duration> { + self.time_to_first_msg + } + pub async fn read_response(&mut self) -> Result<proto::rpc::RpcResponse, RpcError> { + let timer = Instant::now(); let mut resp = self.next().await?; + self.time_to_first_msg = Some(timer.elapsed()); self.check_response(&resp)?; let mut chunk_count = 1; let mut last_chunk_flags = RpcMessageFlags::from_bits_truncate(u8::try_from(resp.flags).unwrap()); diff --git a/comms/core/src/protocol/rpc/server/error.rs b/comms/core/src/protocol/rpc/server/error.rs index ea3458b4e5..a829ff6035 100644 --- a/comms/core/src/protocol/rpc/server/error.rs +++ b/comms/core/src/protocol/rpc/server/error.rs @@ -60,8 +60,17 @@ pub enum RpcServerError { ServiceCallExceededDeadline, #[error("Stream read exceeded deadline")] ReadStreamExceededDeadline, - #[error("Early close error: {0}")] - EarlyCloseError(#[from] EarlyCloseError), + #[error("Early close: {0}")] + EarlyClose(#[from] EarlyCloseError), +} + +impl RpcServerError { + pub fn early_close_io(&self) -> Option<&io::Error> { + match self { + Self::EarlyClose(e) => e.io(), + _ => None, + } + } } impl From for RpcServerError { diff --git a/comms/core/src/protocol/rpc/server/mod.rs b/comms/core/src/protocol/rpc/server/mod.rs index 6690e31418..a05a40de4f 100644 --- a/comms/core/src/protocol/rpc/server/mod.rs +++ b/comms/core/src/protocol/rpc/server/mod.rs @@ -44,6 +44,7 @@ use std::{ convert::TryFrom, future::Future, io, + io::ErrorKind, pin::Pin, sync::Arc, task::Poll, @@ -353,7 +354,7 @@ where { Ok(_) => {}, Err(err @ RpcServerError::HandshakeError(_)) => { - debug!(target: LOG_TARGET, "{}", err); + debug!(target: LOG_TARGET, "Handshake error: {}", err); metrics::handshake_error_counter(&node_id, &notification.protocol).inc(); }, Err(err) => { @@ -530,7 +531,7 @@ where metrics::error_counter(&self.node_id, &self.protocol, &err).inc(); let level = match &err { RpcServerError::Io(e) => err_to_log_level(e), - RpcServerError::EarlyCloseError(e) => e.io().map(err_to_log_level).unwrap_or(log::Level::Error), + RpcServerError::EarlyClose(e) => e.io().map(err_to_log_level).unwrap_or(log::Level::Error), _ => log::Level::Error, }; log!( @@ -562,8 +563,10 @@ where err, ); } - error!( + let level = err.early_close_io().map(err_to_log_level).unwrap_or(log::Level::Error); + log!( target: LOG_TARGET, + level, "(peer: {}, protocol: {}) Failed to handle request: {}", self.node_id, self.protocol_name(), @@ -880,8 +883,12 @@ fn into_response(request_id: u32, result: Result) -> RpcRe } fn err_to_log_level(err: &io::Error) -> log::Level { match err.kind() { - io::ErrorKind::BrokenPipe | io::ErrorKind::WriteZero => log::Level::Debug, + ErrorKind::ConnectionReset | + ErrorKind::ConnectionAborted | + ErrorKind::BrokenPipe | + ErrorKind::WriteZero | +
ErrorKind::UnexpectedEof => log::Level::Debug, _ => log::Level::Error, } } diff --git a/comms/core/src/test_utils/transport.rs b/comms/core/src/test_utils/transport.rs index 7a770440fa..4dd4619c49 100644 --- a/comms/core/src/test_utils/transport.rs +++ b/comms/core/src/test_utils/transport.rs @@ -31,8 +31,8 @@ use crate::{ }; pub async fn build_connected_sockets() -> (Multiaddr, MemorySocket, MemorySocket) { - let (mut listener, addr) = MemoryTransport.listen("/memory/0".parse().unwrap()).await.unwrap(); - let (dial_sock, listen_sock) = future::join(MemoryTransport.dial(addr.clone()), listener.next()).await; + let (mut listener, addr) = MemoryTransport.listen(&"/memory/0".parse().unwrap()).await.unwrap(); + let (dial_sock, listen_sock) = future::join(MemoryTransport.dial(&addr), listener.next()).await; let (listen_sock, _) = listen_sock.unwrap().unwrap(); (addr, dial_sock.unwrap(), listen_sock) } diff --git a/comms/core/src/tor/control_client/client.rs b/comms/core/src/tor/control_client/client.rs index 5d0d0c4f1c..29663f7603 100644 --- a/comms/core/src/tor/control_client/client.rs +++ b/comms/core/src/tor/control_client/client.rs @@ -62,7 +62,7 @@ impl TorControlPortClient { ) -> Result { let mut tcp = TcpTransport::new(); tcp.set_nodelay(true); - let socket = tcp.dial(addr).await?; + let socket = tcp.dial(&addr).await?; Ok(Self::new(socket, event_tx)) } @@ -304,7 +304,7 @@ mod test { #[runtime::test] async fn connect() { let (mut listener, addr) = TcpTransport::default() - .listen("/ip4/127.0.0.1/tcp/0".parse().unwrap()) + .listen(&"/ip4/127.0.0.1/tcp/0".parse().unwrap()) .await .unwrap(); let (event_tx, _) = broadcast::channel(1); diff --git a/comms/core/src/transports/dns/mod.rs b/comms/core/src/transports/dns/mod.rs index d45f9f91ea..85b0d991bc 100644 --- a/comms/core/src/transports/dns/mod.rs +++ b/comms/core/src/transports/dns/mod.rs @@ -38,6 +38,7 @@ use crate::multiaddr::Multiaddr; pub type DnsResolverRef = Arc; +// TODO: use async_trait pub trait DnsResolver: Send + Sync + 'static { fn resolve(&self, addr: Multiaddr) -> BoxFuture<'static, Result>; } diff --git a/comms/core/src/transports/dns/tor.rs b/comms/core/src/transports/dns/tor.rs index 4663392f0c..aa9ce1c658 100644 --- a/comms/core/src/transports/dns/tor.rs +++ b/comms/core/src/transports/dns/tor.rs @@ -48,7 +48,7 @@ impl TorDnsResolver { } pub async fn connect(self) -> Result { - let mut client = connect_inner(self.socks_config.proxy_address) + let mut client = connect_inner(&self.socks_config.proxy_address) .await .map_err(DnsResolverError::ProxyConnectFailed)?; client.with_authentication(self.socks_config.authentication)?; @@ -56,7 +56,7 @@ impl TorDnsResolver { } } -async fn connect_inner(addr: Multiaddr) -> io::Result { +async fn connect_inner(addr: &Multiaddr) -> io::Result { let socket = SocksTransport::create_socks_tcp_transport().dial(addr).await?; Ok(Socks5Client::new(socket)) } diff --git a/comms/core/src/transports/memory.rs b/comms/core/src/transports/memory.rs index fc7c3552ca..4c3455966e 100644 --- a/comms/core/src/transports/memory.rs +++ b/comms/core/src/transports/memory.rs @@ -64,9 +64,9 @@ impl Transport for MemoryTransport { type Listener = Listener; type Output = MemorySocket; - async fn listen(&self, addr: Multiaddr) -> Result<(Self::Listener, Multiaddr), Self::Error> { + async fn listen(&self, addr: &Multiaddr) -> Result<(Self::Listener, Multiaddr), Self::Error> { // parse_addr is not used in the async block because of a rust ICE (internal compiler error) - let port = parse_addr(&addr)?; + let port 
= parse_addr(addr)?; let listener = MemoryListener::bind(port)?; let actual_port = listener.local_addr(); let mut actual_addr = Multiaddr::empty(); @@ -74,9 +74,9 @@ impl Transport for MemoryTransport { Ok((Listener { inner: listener }, actual_addr)) } - async fn dial(&self, addr: Multiaddr) -> Result { + async fn dial(&self, addr: &Multiaddr) -> Result { // parse_addr is not used in the async block because of a rust ICE (internal compiler error) - let port = parse_addr(&addr)?; + let port = parse_addr(addr)?; Ok(MemorySocket::connect(port)?) } } @@ -140,7 +140,7 @@ mod test { async fn simple_listen_and_dial() -> Result<(), ::std::io::Error> { let t = MemoryTransport::default(); - let (listener, addr) = t.listen("/memory/0".parse().unwrap()).await?; + let (listener, addr) = t.listen(&"/memory/0".parse().unwrap()).await?; let listener = async move { let (item, _listener) = listener.into_future().await; @@ -151,7 +151,7 @@ mod test { assert_eq!(buf, b"hello world"); }; - let mut outbound = t.dial(addr).await?; + let mut outbound = t.dial(&addr).await?; let dialer = async move { outbound.write_all(b"hello world").await.unwrap(); @@ -166,10 +166,10 @@ mod test { async fn unsupported_multiaddrs() { let t = MemoryTransport::default(); - let err = t.listen("/ip4/127.0.0.1/tcp/0".parse().unwrap()).await.unwrap_err(); + let err = t.listen(&"/ip4/127.0.0.1/tcp/0".parse().unwrap()).await.unwrap_err(); assert!(matches!(err.kind(), io::ErrorKind::InvalidInput)); - let err = t.dial("/ip4/127.0.0.1/tcp/22".parse().unwrap()).await.unwrap_err(); + let err = t.dial(&"/ip4/127.0.0.1/tcp/22".parse().unwrap()).await.unwrap_err(); assert!(matches!(err.kind(), io::ErrorKind::InvalidInput)); } diff --git a/comms/core/src/transports/mod.rs b/comms/core/src/transports/mod.rs index 90e3de56de..45050f540d 100644 --- a/comms/core/src/transports/mod.rs +++ b/comms/core/src/transports/mod.rs @@ -61,8 +61,8 @@ pub trait Transport { type Listener: Stream> + Send + Unpin; /// Listen for connections on the given multiaddr - async fn listen(&self, addr: Multiaddr) -> Result<(Self::Listener, Multiaddr), Self::Error>; + async fn listen(&self, addr: &Multiaddr) -> Result<(Self::Listener, Multiaddr), Self::Error>; /// Connect (dial) to the given multiaddr - async fn dial(&self, addr: Multiaddr) -> Result; + async fn dial(&self, addr: &Multiaddr) -> Result; } diff --git a/comms/core/src/transports/socks.rs b/comms/core/src/transports/socks.rs index 754eddb0ae..aed81823b3 100644 --- a/comms/core/src/transports/socks.rs +++ b/comms/core/src/transports/socks.rs @@ -80,19 +80,19 @@ impl SocksTransport { async fn socks_connect( tcp: TcpTransport, - socks_config: SocksConfig, - dest_addr: Multiaddr, + socks_config: &SocksConfig, + dest_addr: &Multiaddr, ) -> io::Result { // Create a new connection to the SOCKS proxy - let socks_conn = tcp.dial(socks_config.proxy_address).await?; + let socks_conn = tcp.dial(&socks_config.proxy_address).await?; let mut client = Socks5Client::new(socks_conn); client - .with_authentication(socks_config.authentication) + .with_authentication(socks_config.authentication.clone()) .map_err(|err| io::Error::new(io::ErrorKind::Other, err))?; client - .connect(&dest_addr) + .connect(dest_addr) .await .map(|(socket, _)| socket) .map_err(|err| io::Error::new(io::ErrorKind::Other, err)) @@ -105,18 +105,18 @@ impl Transport for SocksTransport { type Listener = ::Listener; type Output = ::Output; - async fn listen(&self, addr: Multiaddr) -> Result<(Self::Listener, Multiaddr), Self::Error> { + async fn listen(&self, 
addr: &Multiaddr) -> Result<(Self::Listener, Multiaddr), Self::Error> { self.tcp_transport.listen(addr).await } - async fn dial(&self, addr: Multiaddr) -> Result { + async fn dial(&self, addr: &Multiaddr) -> Result { // Bypass the SOCKS proxy and connect to the address directly - if self.socks_config.proxy_bypass_predicate.check(&addr) { + if self.socks_config.proxy_bypass_predicate.check(addr) { debug!(target: LOG_TARGET, "SOCKS proxy bypassed for '{}'. Using TCP.", addr); return self.tcp_transport.dial(addr).await; } - let socket = Self::socks_connect(self.tcp_transport.clone(), self.socks_config.clone(), addr).await?; + let socket = Self::socks_connect(self.tcp_transport.clone(), &self.socks_config, addr).await?; Ok(socket) } } diff --git a/comms/core/src/transports/tcp.rs b/comms/core/src/transports/tcp.rs index aab9fd0f07..c5470a29b7 100644 --- a/comms/core/src/transports/tcp.rs +++ b/comms/core/src/transports/tcp.rs @@ -125,10 +125,10 @@ impl Transport for TcpTransport { type Listener = TcpInbound; type Output = TcpStream; - async fn listen(&self, addr: Multiaddr) -> Result<(Self::Listener, Multiaddr), Self::Error> { + async fn listen(&self, addr: &Multiaddr) -> Result<(Self::Listener, Multiaddr), Self::Error> { let socket_addr = self .dns_resolver - .resolve(addr) + .resolve(addr.clone()) .await .map_err(|err| io::Error::new(io::ErrorKind::Other, format!("Failed to resolve address: {}", err)))?; let listener = TcpListener::bind(&socket_addr).await?; @@ -136,10 +136,10 @@ impl Transport for TcpTransport { Ok((TcpInbound::new(self.clone(), listener), local_addr)) } - async fn dial(&self, addr: Multiaddr) -> Result { + async fn dial(&self, addr: &Multiaddr) -> Result { let socket_addr = self .dns_resolver - .resolve(addr) + .resolve(addr.clone()) .await .map_err(|err| io::Error::new(io::ErrorKind::Other, format!("Address resolution failed: {}", err)))?; diff --git a/comms/core/src/transports/tcp_with_tor.rs b/comms/core/src/transports/tcp_with_tor.rs index 17f2c439bf..f6cea6e991 100644 --- a/comms/core/src/transports/tcp_with_tor.rs +++ b/comms/core/src/transports/tcp_with_tor.rs @@ -67,11 +67,11 @@ impl Transport for TcpWithTorTransport { type Listener = ::Listener; type Output = TcpStream; - async fn listen(&self, addr: Multiaddr) -> Result<(Self::Listener, Multiaddr), Self::Error> { + async fn listen(&self, addr: &Multiaddr) -> Result<(Self::Listener, Multiaddr), Self::Error> { self.tcp_transport.listen(addr).await } - async fn dial(&self, addr: Multiaddr) -> Result { + async fn dial(&self, addr: &Multiaddr) -> Result { if addr.is_empty() { return Err(io::Error::new( io::ErrorKind::InvalidInput, @@ -79,7 +79,7 @@ impl Transport for TcpWithTorTransport { )); } - if is_onion_address(&addr) { + if is_onion_address(addr) { match self.socks_transport { Some(ref transport) => { let socket = transport.dial(addr).await?; diff --git a/comms/dht/Cargo.toml b/comms/dht/Cargo.toml index b644c51565..08bd5c0d88 100644 --- a/comms/dht/Cargo.toml +++ b/comms/dht/Cargo.toml @@ -1,6 +1,6 @@ [package] name = "tari_comms_dht" -version = "0.38.5" +version = "0.38.7" authors = ["The Tari Development Community"] description = "Tari comms DHT module" repository = "https://github.com/tari-project/tari" diff --git a/comms/dht/src/dht.rs b/comms/dht/src/dht.rs index e3acdae98a..3fef7d43a8 100644 --- a/comms/dht/src/dht.rs +++ b/comms/dht/src/dht.rs @@ -31,6 +31,7 @@ use tari_comms::{ pipeline::PipelineError, }; use tari_shutdown::ShutdownSignal; +use tari_utilities::epoch_time::EpochTime; use 
use tokio::sync::{broadcast, mpsc}; use tower::{layer::Layer, Service, ServiceBuilder}; @@ -298,6 +299,7 @@ impl Dht { .layer(MetricsLayer::new(self.metrics_collector.clone())) .layer(inbound::DeserializeLayer::new(self.peer_manager.clone())) .layer(filter::FilterLayer::new(self.unsupported_saf_messages_filter())) + .layer(filter::FilterLayer::new(discard_expired_messages)) .layer(inbound::DecryptionLayer::new( self.config.clone(), self.node_identity.clone(), @@ -432,6 +434,20 @@ fn filter_messages_to_rebroadcast(msg: &DecryptedDhtMessage) -> bool { } } +/// Check message expiry and immediately discard if expired +fn discard_expired_messages(msg: &DhtInboundMessage) -> bool { + if let Some(expires) = msg.dht_header.expires { + if expires < EpochTime::now() { + debug!( + target: LOG_TARGET, + "[discard_expired_messages] Discarding expired message {}", msg + ); + return false; + } + } + true +} + #[cfg(test)] mod test { use std::{sync::Arc, time::Duration}; diff --git a/comms/dht/src/envelope.rs b/comms/dht/src/envelope.rs index 3f4f2ef06e..6ac881cb80 100644 --- a/comms/dht/src/envelope.rs +++ b/comms/dht/src/envelope.rs @@ -43,7 +43,7 @@ use crate::version::DhtProtocolVersion; pub(crate) fn datetime_to_timestamp(datetime: DateTime<Utc>) -> Timestamp { Timestamp { seconds: datetime.timestamp(), - nanos: datetime.timestamp_subsec_nanos().try_into().unwrap_or(std::i32::MAX), + nanos: datetime.timestamp_subsec_nanos().try_into().unwrap_or(i32::MAX), } } diff --git a/comms/dht/src/store_forward/database/stored_message.rs b/comms/dht/src/store_forward/database/stored_message.rs index b8d095d901..1913b5be02 100644 --- a/comms/dht/src/store_forward/database/stored_message.rs +++ b/comms/dht/src/store_forward/database/stored_message.rs @@ -20,8 +20,6 @@ // WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE // USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
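
The `FilterLayer` added above drops expired messages before decryption is even attempted, the cheapest possible point to reject them. A minimal sketch of the predicate's logic (assuming `EpochTime` from `tari_utilities` as imported above; logging omitted):

    use tari_utilities::epoch_time::EpochTime;

    /// Keep a message only if it has no expiry, or its expiry is in the future.
    fn keep_message(expires: Option<EpochTime>) -> bool {
        match expires {
            Some(expiry) => expiry >= EpochTime::now(),
            None => true,
        }
    }
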
-use std::convert::TryInto; - use chrono::NaiveDateTime; use tari_comms::message::MessageExt; use tari_utilities::{hex, hex::Hex}; @@ -50,7 +48,7 @@ pub struct NewStoredMessage { } impl NewStoredMessage { - pub fn try_construct(message: DecryptedDhtMessage, priority: StoredMessagePriority) -> Option<Self> { + pub fn new(message: DecryptedDhtMessage, priority: StoredMessagePriority) -> Self { let DecryptedDhtMessage { authenticated_origin, decryption_result, @@ -64,8 +62,8 @@ impl NewStoredMessage { }; let body_hash = hex::to_hex(&dedup::create_message_hash(&dht_header.message_signature, &body)); - Some(Self { - version: dht_header.version.as_major().try_into().ok()?, + Self { + version: dht_header.version.as_major() as i32, origin_pubkey: authenticated_origin.as_ref().map(|pk| pk.to_hex()), message_type: dht_header.message_type as i32, destination_pubkey: dht_header.destination.public_key().map(|pk| pk.to_hex()), @@ -81,7 +79,7 @@ }, body_hash, body, - }) + } } } diff --git a/comms/dht/src/store_forward/error.rs b/comms/dht/src/store_forward/error.rs index 4a71b410eb..85fd5678c2 100644 --- a/comms/dht/src/store_forward/error.rs +++ b/comms/dht/src/store_forward/error.rs @@ -27,7 +27,7 @@ use tari_comms::{ message::MessageError, peer_manager::{NodeId, PeerManagerError}, }; -use tari_utilities::byte_array::ByteArrayError; +use tari_utilities::{byte_array::ByteArrayError, epoch_time::EpochTime}; use thiserror::Error; use crate::{ @@ -81,10 +81,10 @@ pub enum StoreAndForwardError { RequesterChannelClosed, #[error("The request was cancelled by the store and forward service")] RequestCancelled, - #[error("The message was not valid for store and forward")] - InvalidStoreMessage, - #[error("The envelope version is invalid")] - InvalidEnvelopeVersion, + #[error("The {field} field was not valid, discarding SAF response: {details}")] + InvalidSafResponseMessage { field: &'static str, details: String }, + #[error("The message has expired, not storing message in SAF db (expiry: {expired}, now: {now})")] + NotStoringExpiredMessage { expired: EpochTime, now: EpochTime }, #[error("MalformedNodeId: {0}")] MalformedNodeId(#[from] ByteArrayError), #[error("DHT message type should not have been forwarded")] diff --git a/comms/dht/src/store_forward/message.rs b/comms/dht/src/store_forward/message.rs index f753b9941b..f74af32c61 100644 --- a/comms/dht/src/store_forward/message.rs +++ b/comms/dht/src/store_forward/message.rs @@ -20,7 +20,7 @@ // WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE // USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
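
`NewStoredMessage::new` can be infallible because the one failure point in `try_construct`, converting the major version to `i32`, never loses information: as the commit message notes, `u32` and `i32` are both 32 bits wide, so `as i32` is a bit-for-bit cast and `as u32` recovers the original value. A worked illustration (illustrative only):

    fn main() {
        let major: u32 = u32::MAX; // worst case: try_into::<i32>() would have failed here
        let stored = major as i32; // bit-for-bit; may print as a negative number in SQL
        assert_eq!(stored as u32, major); // round-trips exactly
    }
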
-use std::convert::{TryFrom, TryInto}; +use std::convert::TryFrom; use chrono::{DateTime, Utc}; use prost::Message; @@ -76,10 +76,7 @@ impl TryFrom for StoredMessage { let dht_header = DhtHeader::decode(message.header.as_slice())?; Ok(Self { stored_at: Some(datetime_to_timestamp(DateTime::from_utc(message.stored_at, Utc))), - version: message - .version - .try_into() - .map_err(|_| StoreAndForwardError::InvalidEnvelopeVersion)?, + version: message.version as u32, body: message.body, dht_header: Some(dht_header), }) diff --git a/comms/dht/src/store_forward/saf_handler/task.rs b/comms/dht/src/store_forward/saf_handler/task.rs index 7f5390d382..4bce651e68 100644 --- a/comms/dht/src/store_forward/saf_handler/task.rs +++ b/comms/dht/src/store_forward/saf_handler/task.rs @@ -36,7 +36,7 @@ use tari_comms::{ types::CommsPublicKey, BytesMut, }; -use tari_utilities::{convert::try_convert_all, ByteArray}; +use tari_utilities::ByteArray; use tokio::sync::mpsc; use tower::{Service, ServiceExt}; @@ -216,7 +216,7 @@ where S: Service let messages = self.saf_requester.fetch_messages(query.clone()).await?; let stored_messages = StoredMessagesResponse { - messages: try_convert_all(messages)?, + messages: messages.into_iter().map(TryInto::try_into).collect::<Result<_, _>>()?, request_id: retrieve_msgs.request_id, response_type: resp_type as i32, }; @@ -430,8 +430,13 @@ where S: Service .stored_at .map(|t| { Result::<_, StoreAndForwardError>::Ok(DateTime::from_utc( - NaiveDateTime::from_timestamp_opt(t.seconds, t.nanos.try_into().unwrap_or(u32::MAX)) - .ok_or(StoreAndForwardError::InvalidStoreMessage)?, + NaiveDateTime::from_timestamp_opt(t.seconds, 0).ok_or_else(|| { + StoreAndForwardError::InvalidSafResponseMessage { + field: "stored_at", + details: "number of seconds provided represents more days than can fit in a u32" + .to_string(), + } + })?, Utc, )) }) @@ -618,7 +623,7 @@ where S: Service mod test { use std::time::Duration; - use chrono::Utc; + use chrono::{Timelike, Utc}; use tari_comms::{message::MessageExt, runtime, wrap_in_envelope_body}; use tari_test_utils::collect_recv; use tari_utilities::{hex, hex::Hex}; @@ -932,7 +937,7 @@ mod test { .unwrap() .unwrap(); - assert_eq!(last_saf_received, msg2_time); + assert_eq!(last_saf_received.second(), msg2_time.second()); } #[runtime::test] diff --git a/comms/dht/src/store_forward/store.rs b/comms/dht/src/store_forward/store.rs index c0d2b8d224..70690bde94 100644 --- a/comms/dht/src/store_forward/store.rs +++ b/comms/dht/src/store_forward/store.rs @@ -437,13 +437,13 @@ where S: Service + Se ); if let Some(expires) = message.dht_header.expires { - if expires < EpochTime::now() { - return SafResult::Err(StoreAndForwardError::InvalidStoreMessage); + let now = EpochTime::now(); + if expires < now { + return Err(StoreAndForwardError::NotStoringExpiredMessage { expired: expires, now }); } } - let stored_message = - NewStoredMessage::try_construct(message, priority).ok_or(StoreAndForwardError::InvalidStoreMessage)?; + let stored_message = NewStoredMessage::new(message, priority); self.saf_requester.insert_message(stored_message).await } } diff --git a/comms/rpc_macros/Cargo.toml b/comms/rpc_macros/Cargo.toml index 81e33db8ea..f9aac328f1 100644 --- a/comms/rpc_macros/Cargo.toml +++ b/comms/rpc_macros/Cargo.toml @@ -6,7 +6,7 @@ repository = "https://github.com/tari-project/tari" homepage = "https://tari.com" readme = "README.md" license = "BSD-3-Clause" -version = "0.38.5" +version = "0.38.7" edition = "2018" [lib] diff --git a/infrastructure/derive/Cargo.toml 
b/infrastructure/derive/Cargo.toml index dbb5fe630c..f7c686eed4 100644 --- a/infrastructure/derive/Cargo.toml +++ b/infrastructure/derive/Cargo.toml @@ -6,7 +6,7 @@ repository = "https://github.com/tari-project/tari" homepage = "https://tari.com" readme = "README.md" license = "BSD-3-Clause" -version = "0.38.5" +version = "0.38.7" edition = "2018" [lib] diff --git a/infrastructure/shutdown/Cargo.toml b/infrastructure/shutdown/Cargo.toml index 9070aab5e4..cd2c41d80e 100644 --- a/infrastructure/shutdown/Cargo.toml +++ b/infrastructure/shutdown/Cargo.toml @@ -6,7 +6,7 @@ repository = "https://github.com/tari-project/tari" homepage = "https://tari.com" readme = "README.md" license = "BSD-3-Clause" -version = "0.38.5" +version = "0.38.7" edition = "2018" # See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html diff --git a/infrastructure/storage/Cargo.toml b/infrastructure/storage/Cargo.toml index 181f62b20e..9a966cf497 100644 --- a/infrastructure/storage/Cargo.toml +++ b/infrastructure/storage/Cargo.toml @@ -6,7 +6,7 @@ repository = "https://github.com/tari-project/tari" homepage = "https://tari.com" readme = "README.md" license = "BSD-3-Clause" -version = "0.38.5" +version = "0.38.7" edition = "2018" [dependencies] diff --git a/infrastructure/storage/tests/lmdb.rs b/infrastructure/storage/tests/lmdb.rs index 38441e39e0..45740521c7 100644 --- a/infrastructure/storage/tests/lmdb.rs +++ b/infrastructure/storage/tests/lmdb.rs @@ -118,7 +118,7 @@ fn insert_all_users(name: &str) -> (Vec<User>, LMDBDatabase) { } #[test] -fn single_thread() { +fn test_single_thread() { { let users = load_users(); let env = init("single_thread").unwrap(); @@ -136,7 +136,7 @@ fn single_thread() { } #[test] -fn multi_thread() { +fn test_multi_thread() { { let users_arc = Arc::new(load_users()); let env = init("multi_thread").unwrap(); @@ -167,7 +167,7 @@ fn multi_thread() { } #[test] -fn transactions() { +fn test_transactions() { { let (users, db) = insert_all_users("transactions"); // Test the `exists` and value retrieval functions @@ -186,7 +186,7 @@ fn transactions() { /// Simultaneous writes in different threads #[test] #[allow(clippy::same_item_push)] -fn multi_thread_writes() { +fn test_multi_thread_writes() { { let env = init("multi-thread-writes").unwrap(); let mut threads = Vec::new(); @@ -220,7 +220,7 @@ fn multi_thread_writes() { /// Multiple write transactions in a single thread #[test] -fn multi_writes() { +fn test_multi_writes() { { let env = init("multi-writes").unwrap(); for i in 0..2 { @@ -241,7 +241,7 @@ fn multi_writes() { } #[test] -fn pair_iterator() { +fn test_pair_iterator() { { let (users, db) = insert_all_users("pair_iterator"); let res = db.for_each::<u64, User, _>(|pair| { @@ -256,7 +256,7 @@ fn pair_iterator() { } #[test] -fn exists_and_delete() { +fn test_exists_and_delete() { { let (_, db) = insert_all_users("delete"); assert!(db.contains_key(&525u64).unwrap()); @@ -267,7 +267,7 @@ fn exists_and_delete() { } #[test] -fn lmdb_resize_on_create() { +fn test_lmdb_resize_on_create() { let db_env_name = "resize"; { let path = get_path(db_env_name); diff --git a/infrastructure/tari_script/src/lib.rs b/infrastructure/tari_script/src/lib.rs index e796c55a4d..81ef3d5e7f 100644 --- a/infrastructure/tari_script/src/lib.rs +++ b/infrastructure/tari_script/src/lib.rs @@ -24,7 +24,7 @@ mod serde; mod stack; pub use error::ScriptError; -pub use op_codes::{slice_to_boxed_hash, slice_to_hash, HashValue, Opcode}; +pub use op_codes::{slice_to_boxed_hash, slice_to_hash, HashValue, Message, 
Opcode, ScalarValue}; pub use script::TariScript; pub use script_commitment::{ScriptCommitment, ScriptCommitmentError, ScriptCommitmentFactory}; pub use script_context::ScriptContext; diff --git a/infrastructure/tari_script/src/op_codes.rs b/infrastructure/tari_script/src/op_codes.rs index 50350e0dbb..2eab0a48bd 100644 --- a/infrastructure/tari_script/src/op_codes.rs +++ b/infrastructure/tari_script/src/op_codes.rs @@ -118,6 +118,7 @@ pub const OP_HASH_BLAKE256: u8 = 0xb0; pub const OP_HASH_SHA256: u8 = 0xb1; pub const OP_HASH_SHA3: u8 = 0xb2; pub const OP_TO_RISTRETTO_POINT: u8 = 0xb3; +pub const OP_CHECK_MULTI_SIG_VERIFY_AGGREGATE_PUB_KEY: u8 = 0xb4; // Opcode constants: Miscellaneous pub const OP_RETURN: u8 = 0x60; @@ -234,6 +235,9 @@ pub enum Opcode { /// Identical to CheckMultiSig, except that nothing is pushed to the stack if the m signatures are valid, and the /// operation fails with VERIFY_FAILED if any of the signatures are invalid. CheckMultiSigVerify(u8, u8, Vec<RistrettoPublicKey>, Box<Message>), + /// Pop m signatures from the stack. If m signatures out of the provided n public keys sign the 32-byte message, + /// push the aggregate of the public keys to the stack, otherwise fails with VERIFY_FAILED. + CheckMultiSigVerifyAggregatePubKey(u8, u8, Vec<RistrettoPublicKey>, Box<Message>), /// Pops the top element which must be a valid 32-byte scalar or hash and calculates the corresponding Ristretto /// point, and pushes the result to the stack. Fails with EMPTY_STACK if the stack is empty. ToRistrettoPoint, @@ -355,6 +359,10 @@ impl Opcode { let (m, n, keys, msg, end) = Opcode::read_multisig_args(bytes)?; Ok((CheckMultiSigVerify(m, n, keys, msg), &bytes[end..])) }, + OP_CHECK_MULTI_SIG_VERIFY_AGGREGATE_PUB_KEY => { + let (m, n, keys, msg, end) = Opcode::read_multisig_args(bytes)?; + Ok((CheckMultiSigVerifyAggregatePubKey(m, n, keys, msg), &bytes[end..])) + }, OP_TO_RISTRETTO_POINT => Ok((ToRistrettoPoint, &bytes[1..])), OP_RETURN => Ok((Return, &bytes[1..])), OP_IF_THEN => Ok((IfThen, &bytes[1..])), @@ -464,6 +472,13 @@ impl Opcode { } array.extend_from_slice(msg.deref()); }, + CheckMultiSigVerifyAggregatePubKey(m, n, public_keys, msg) => { + array.extend_from_slice(&[OP_CHECK_MULTI_SIG_VERIFY_AGGREGATE_PUB_KEY, *m, *n]); + for public_key in public_keys { + array.extend(public_key.as_bytes()); + } + array.extend_from_slice(msg.deref()); + }, ToRistrettoPoint => array.push(OP_TO_RISTRETTO_POINT), Return => array.push(OP_RETURN), IfThen => array.push(OP_IF_THEN), @@ -530,6 +545,17 @@ impl fmt::Display for Opcode { (*msg).to_hex() ) }, + CheckMultiSigVerifyAggregatePubKey(m, n, public_keys, msg) => { + let keys: Vec<String> = public_keys.iter().map(|p| p.to_hex()).collect(); + write!( + fmt, + "CheckMultiSigVerifyAggregatePubKey({}, {}, [{}], {})", + *m, + *n, + keys.join(", "), + (*msg).to_hex() + ) + }, ToRistrettoPoint => write!(fmt, "ToRistrettoPoint"), Return => write!(fmt, "Return"), IfThen => write!(fmt, "IfThen"), @@ -766,12 +792,20 @@ mod test { 6c9cb4d3e57351462122310fa22c90b1e6dfb528d64615363d1261a75da3e401)", ); test_checkmultisig( - &Opcode::CheckMultiSigVerify(1, 2, keys, Box::new(*msg)), + &Opcode::CheckMultiSigVerify(1, 2, keys.clone(), Box::new(*msg)), OP_CHECK_MULTI_SIG_VERIFY, "CheckMultiSigVerify(1, 2, [9c8bc5f90d221191748e8dd7686f09e1114b4bada4c367ed58ae199c51eb100b, \ 56e9f018b138ba843521b3243a29d81730c3a4c25108b108b1ca47c2132db569], \ 6c9cb4d3e57351462122310fa22c90b1e6dfb528d64615363d1261a75da3e401)", ); + test_checkmultisig( + &Opcode::CheckMultiSigVerifyAggregatePubKey(1, 2, keys, Box::new(*msg)), + OP_CHECK_MULTI_SIG_VERIFY_AGGREGATE_PUB_KEY, + "CheckMultiSigVerifyAggregatePubKey(1, 2, \ + [9c8bc5f90d221191748e8dd7686f09e1114b4bada4c367ed58ae199c51eb100b, \ + 56e9f018b138ba843521b3243a29d81730c3a4c25108b108b1ca47c2132db569], \ + 6c9cb4d3e57351462122310fa22c90b1e6dfb528d64615363d1261a75da3e401)", + ); }
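
Per the `to_bytes` arm above, the new opcode serialises as the tag byte 0xb4, then m and n, then the n public keys, then the message. Assuming 32-byte Ristretto keys and the 32-byte `Message` used in the tests, the encoded size works out as follows (a sketch, not code from the patch):

    /// Encoded size of CheckMultiSigVerifyAggregatePubKey(m, n, keys, msg):
    /// 1 tag byte + 1 byte for m + 1 byte for n + n * 32 key bytes + 32 message bytes.
    fn encoded_len(n: u8) -> usize {
        3 + (n as usize) * 32 + 32
    }

    fn main() {
        assert_eq!(encoded_len(2), 99); // the 1-of-2 case exercised in the tests
    }
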
#[test] diff --git a/infrastructure/tari_script/src/script.rs b/infrastructure/tari_script/src/script.rs index b91fce8480..df7d91b945 100644 --- a/infrastructure/tari_script/src/script.rs +++ b/infrastructure/tari_script/src/script.rs @@ -119,7 +119,7 @@ impl TariScript { } } - pub fn as_bytes(&self) -> Vec<u8> { + pub fn to_bytes(&self) -> Vec<u8> { self.script.iter().fold(Vec::new(), |mut bytes, op| { op.to_bytes(&mut bytes); bytes @@ -137,7 +137,7 @@ impl TariScript { if D::output_size() < 32 { return Err(ScriptError::InvalidDigest); } - let h = D::digest(&self.as_bytes()); + let h = D::digest(&self.to_bytes()); Ok(slice_to_hash(&h.as_slice()[..32])) } @@ -178,7 +178,7 @@ impl TariScript { pub fn script_message(&self, pub_key: &RistrettoPublicKey) -> Result<RistrettoSecretKey, ScriptError> { let b = Blake256::new() .chain(pub_key.as_bytes()) - .chain(&self.as_bytes()) + .chain(&self.to_bytes()) .finalize(); RistrettoSecretKey::from_bytes(b.as_slice()).map_err(|_| ScriptError::InvalidSignature) } @@ -248,19 +248,26 @@ impl TariScript { } }, CheckMultiSig(m, n, public_keys, msg) => { - if self.check_multisig(stack, *m, *n, public_keys, *msg.deref())? { + if self.check_multisig(stack, *m, *n, public_keys, *msg.deref())?.is_some() { stack.push(Number(1)) } else { stack.push(Number(0)) } }, CheckMultiSigVerify(m, n, public_keys, msg) => { - if self.check_multisig(stack, *m, *n, public_keys, *msg.deref())? { + if self.check_multisig(stack, *m, *n, public_keys, *msg.deref())?.is_some() { Ok(()) } else { Err(ScriptError::VerifyFailed) } }, + CheckMultiSigVerifyAggregatePubKey(m, n, public_keys, msg) => { + if let Some(agg_pub_key) = self.check_multisig(stack, *m, *n, public_keys, *msg.deref())? 
{ + stack.push(PublicKey(agg_pub_key)) + } else { + Err(ScriptError::VerifyFailed) + } + }, ToRistrettoPoint => self.handle_to_ristretto_point(stack), Return => Err(ScriptError::Return), IfThen => TariScript::handle_if_then(stack, state), @@ -505,9 +512,9 @@ impl TariScript { n: u8, public_keys: &[RistrettoPublicKey], message: Message, - ) -> Result<bool, ScriptError> { - if m == 0 || n == 0 || m > n || n > MAX_MULTISIG_LIMIT { - return Err(ScriptError::InvalidData); + ) -> Result<Option<RistrettoPublicKey>, ScriptError> { + if m == 0 || n == 0 || m > n || n > MAX_MULTISIG_LIMIT || public_keys.len() != n as usize { + return Err(ScriptError::ValueExceedsBounds); } // pop m sigs let m = m as usize; @@ -524,20 +531,25 @@ impl TariScript { #[allow(clippy::mutable_key_type)] let mut sig_set = HashSet::new(); + let mut agg_pub_key = RistrettoPublicKey::default(); for s in &signatures { for (i, pk) in public_keys.iter().enumerate() { if !sig_set.contains(s) && !key_signed[i] && s.verify_challenge(pk, &message) { key_signed[i] = true; sig_set.insert(s); + agg_pub_key = agg_pub_key + pk; break; } } if !sig_set.contains(s) { - return Ok(false); + return Ok(None); } } - - Ok(sig_set.len() == m) + if sig_set.len() == m { + Ok(Some(agg_pub_key)) + } else { + Ok(None) + } } fn handle_to_ristretto_point(&self, stack: &mut ExecutionStack) -> Result<(), ScriptError> { @@ -562,7 +574,7 @@ impl Hex for TariScript { } fn to_hex(&self) -> String { - to_hex(&self.as_bytes()) + to_hex(&self.to_bytes()) } } @@ -625,6 +637,7 @@ mod test { inputs, op_codes::{slice_to_boxed_hash, slice_to_boxed_message, HashValue, Message}, ExecutionStack, + Opcode::CheckMultiSigVerifyAggregatePubKey, ScriptContext, StackItem, StackItem::{Commitment, Hash, Number}, @@ -948,7 +961,7 @@ mod test { #[test] fn serialisation() { let script = script!(Add Sub Add); - assert_eq!(&script.as_bytes(), &[0x93, 0x94, 0x93]); + assert_eq!(&script.to_bytes(), &[0x93, 0x94, 0x93]); assert_eq!(TariScript::from_bytes(&[0x93, 0x94, 0x93]).unwrap(), script); assert_eq!(script.to_hex(), "939493"); assert_eq!(TariScript::from_hex("939493").unwrap(), script); @@ -1145,21 +1158,21 @@ mod test { let script = TariScript::new(ops); let inputs = inputs!(s_alice.clone()); let err = script.execute(&inputs).unwrap_err(); - assert_eq!(err, ScriptError::InvalidData); + assert_eq!(err, ScriptError::ValueExceedsBounds); let keys = vec![p_alice.clone(), p_bob.clone()]; let ops = vec![CheckMultiSig(1, 0, keys, msg.clone())]; let script = TariScript::new(ops); let inputs = inputs!(s_alice.clone()); let err = script.execute(&inputs).unwrap_err(); - assert_eq!(err, ScriptError::InvalidData); + assert_eq!(err, ScriptError::ValueExceedsBounds); let keys = vec![p_alice, p_bob]; let ops = vec![CheckMultiSig(2, 1, keys, msg)]; let script = TariScript::new(ops); let inputs = inputs!(s_alice); let err = script.execute(&inputs).unwrap_err(); - assert_eq!(err, ScriptError::InvalidData); + assert_eq!(err, ScriptError::ValueExceedsBounds); // max n is 32 let (msg, data) = multisig_data(33); @@ -1169,7 +1182,7 @@ mod test { let items = sigs.map(StackItem::Signature).collect(); let inputs = ExecutionStack::new(items); let err = script.execute(&inputs).unwrap_err(); - assert_eq!(err, ScriptError::InvalidData); + assert_eq!(err, ScriptError::ValueExceedsBounds); // 3 of 4 let (msg, data) = multisig_data(4); @@ -1258,7 +1271,7 @@ mod test { // 1 of 3 let keys = vec![p_alice.clone(), p_bob.clone(), p_carol.clone()]; - let ops = vec![CheckMultiSigVerify(1, 2, keys, msg.clone())]; + let ops = vec![CheckMultiSigVerify(1, 3, keys, msg.clone())];
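
The `check_multisig` change above is the heart of the new opcode: instead of a bare `bool` it now returns `Some(aggregate)` on success, where the aggregate is the sum of exactly those public keys whose signatures verified, folded in the order the keys appear in the opcode. A sketch of that fold in isolation (same accumulation as `agg_pub_key = agg_pub_key + pk` above):

    use tari_crypto::ristretto::RistrettoPublicKey;

    /// Sum the public keys of the verified signatories, seeded with the
    /// identity element, exactly as check_multisig accumulates them.
    fn aggregate(signers: &[RistrettoPublicKey]) -> RistrettoPublicKey {
        let mut agg = RistrettoPublicKey::default();
        for pk in signers {
            agg = agg + pk;
        }
        agg
    }
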
let script = TariScript::new(ops); let inputs = inputs!(Number(1), s_alice.clone()); @@ -1292,6 +1305,31 @@ mod test { let err = script.execute(&inputs).unwrap_err(); assert_eq!(err, ScriptError::VerifyFailed); + // 2 of 3 (returning the aggregate public key of the signatories) + let keys = vec![p_alice.clone(), p_bob.clone(), p_carol.clone()]; + let ops = vec![CheckMultiSigVerifyAggregatePubKey(2, 3, keys, msg.clone())]; + let script = TariScript::new(ops); + + let inputs = inputs!(s_alice.clone(), s_bob.clone()); + let agg_pub_key = script.execute(&inputs).unwrap(); + assert_eq!(agg_pub_key, StackItem::PublicKey(p_alice.clone() + p_bob.clone())); + + let inputs = inputs!(s_alice.clone(), s_carol.clone()); + let agg_pub_key = script.execute(&inputs).unwrap(); + assert_eq!(agg_pub_key, StackItem::PublicKey(p_alice.clone() + p_carol.clone())); + + let inputs = inputs!(s_bob.clone(), s_carol.clone()); + let agg_pub_key = script.execute(&inputs).unwrap(); + assert_eq!(agg_pub_key, StackItem::PublicKey(p_bob.clone() + p_carol.clone())); + + let inputs = inputs!(s_alice.clone(), s_carol.clone(), s_bob.clone()); + let err = script.execute(&inputs).unwrap_err(); + assert_eq!(err, ScriptError::NonUnitLengthStack); + + let inputs = inputs!(p_bob.clone()); + let err = script.execute(&inputs).unwrap_err(); + assert_eq!(err, ScriptError::StackUnderflow); + // 3 of 3 let keys = vec![p_alice.clone(), p_bob.clone(), p_carol]; let ops = vec![CheckMultiSigVerify(3, 3, keys, msg.clone())]; @@ -1313,21 +1351,21 @@ mod test { let script = TariScript::new(ops); let inputs = inputs!(s_alice.clone()); let err = script.execute(&inputs).unwrap_err(); - assert_eq!(err, ScriptError::InvalidData); + assert_eq!(err, ScriptError::ValueExceedsBounds); let keys = vec![p_alice.clone(), p_bob.clone()]; let ops = vec![CheckMultiSigVerify(1, 0, keys, msg.clone())]; let script = TariScript::new(ops); let inputs = inputs!(s_alice.clone()); let err = script.execute(&inputs).unwrap_err(); - assert_eq!(err, ScriptError::InvalidData); + assert_eq!(err, ScriptError::ValueExceedsBounds); let keys = vec![p_alice, p_bob]; let ops = vec![CheckMultiSigVerify(2, 1, keys, msg)]; let script = TariScript::new(ops); let inputs = inputs!(s_alice); let err = script.execute(&inputs).unwrap_err(); - assert_eq!(err, ScriptError::InvalidData); + assert_eq!(err, ScriptError::ValueExceedsBounds); // 3 of 4 let (msg, data) = multisig_data(4); diff --git a/infrastructure/tari_script/src/serde.rs b/infrastructure/tari_script/src/serde.rs index 658eef02a9..b9379dae64 100644 --- a/infrastructure/tari_script/src/serde.rs +++ b/infrastructure/tari_script/src/serde.rs @@ -26,12 +26,12 @@ use serde::{ }; use tari_utilities::hex::{from_hex, Hex}; -use crate::TariScript; +use crate::{ExecutionStack, TariScript}; impl Serialize for TariScript { fn serialize<S>(&self, ser: S) -> Result<S::Ok, S::Error> where S: Serializer { - let script_bin = self.as_bytes(); + let script_bin = self.to_bytes(); if ser.is_human_readable() { ser.serialize_str(&script_bin.to_hex()) } else { @@ -40,44 +40,99 @@ impl Serialize for TariScript { } } -struct ScriptVisitor; +impl<'de> Deserialize<'de> for TariScript { + fn deserialize<D>(de: D) -> Result<Self, D::Error> + where D: Deserializer<'de> { + struct ScriptVisitor; -impl<'de> Visitor<'de> for ScriptVisitor { - type Value = TariScript; + impl<'de> Visitor<'de> for ScriptVisitor { + type Value = TariScript; - fn expecting(&self, fmt: &mut fmt::Formatter) -> fmt::Result { - fmt.write_str("Expecting a binary array or hex string") - } + fn expecting(&self,
fmt: &mut fmt::Formatter) -> fmt::Result { + fmt.write_str("Expecting a binary array or hex string") + } - fn visit_str<E>(self, v: &str) -> Result<Self::Value, E> - where E: Error { - let bytes = from_hex(v).map_err(|e| E::custom(e.to_string()))?; - self.visit_bytes(&bytes) - } + fn visit_str<E>(self, v: &str) -> Result<Self::Value, E> + where E: Error { + let bytes = from_hex(v).map_err(|e| E::custom(e.to_string()))?; + self.visit_bytes(&bytes) + } - fn visit_string<E>(self, v: String) -> Result<Self::Value, E> - where E: Error { - self.visit_str(&v) - } + fn visit_string<E>(self, v: String) -> Result<Self::Value, E> + where E: Error { + self.visit_str(&v) + } + + fn visit_bytes<E>(self, v: &[u8]) -> Result<Self::Value, E> + where E: Error { + TariScript::from_bytes(v).map_err(|e| E::custom(e.to_string())) + } + + fn visit_borrowed_bytes<E>(self, v: &'de [u8]) -> Result<Self::Value, E> + where E: Error { + self.visit_bytes(v) + } + } - fn visit_bytes<E>(self, v: &[u8]) -> Result<Self::Value, E> - where E: Error { - TariScript::from_bytes(v).map_err(|e| E::custom(e.to_string())) + if de.is_human_readable() { + de.deserialize_string(ScriptVisitor) + } else { + de.deserialize_bytes(ScriptVisitor) + } } +} - fn visit_borrowed_bytes<E>(self, v: &'de [u8]) -> Result<Self::Value, E> - where E: Error { - self.visit_bytes(v) +// -------------------------------- ExecutionStack -------------------------------- // +impl Serialize for ExecutionStack { + fn serialize<S>(&self, ser: S) -> Result<S::Ok, S::Error> + where S: Serializer { + let stack_bin = self.to_bytes(); + if ser.is_human_readable() { + ser.serialize_str(&stack_bin.to_hex()) + } else { + ser.serialize_bytes(&stack_bin) + } } } -impl<'de> Deserialize<'de> for TariScript { +impl<'de> Deserialize<'de> for ExecutionStack { fn deserialize<D>(de: D) -> Result<Self, D::Error> where D: Deserializer<'de> { + struct ExecutionStackVisitor; + + impl<'de> Visitor<'de> for ExecutionStackVisitor { + type Value = ExecutionStack; + + fn expecting(&self, fmt: &mut fmt::Formatter) -> fmt::Result { + fmt.write_str("Expecting a binary array or hex string") + } + + fn visit_str<E>(self, v: &str) -> Result<Self::Value, E> + where E: Error { + let bytes = from_hex(v).map_err(|e| E::custom(e.to_string()))?; + self.visit_bytes(&bytes) + } + + fn visit_string<E>(self, v: String) -> Result<Self::Value, E> + where E: Error { + self.visit_str(&v) + } + + fn visit_bytes<E>(self, v: &[u8]) -> Result<Self::Value, E> + where E: Error { + ExecutionStack::from_bytes(v).map_err(|e| E::custom(e.to_string())) + } + + fn visit_borrowed_bytes<E>(self, v: &'de [u8]) -> Result<Self::Value, E> + where E: Error { + self.visit_bytes(v) + } + } + if de.is_human_readable() { - de.deserialize_string(ScriptVisitor) + de.deserialize_string(ExecutionStackVisitor) } else { - de.deserialize_bytes(ScriptVisitor) + de.deserialize_bytes(ExecutionStackVisitor) } } } diff --git a/infrastructure/tari_script/src/stack.rs b/infrastructure/tari_script/src/stack.rs index 757988f9c3..f3b714b95c 100644 --- a/infrastructure/tari_script/src/stack.rs +++ b/infrastructure/tari_script/src/stack.rs @@ -17,7 +17,6 @@ use std::convert::TryFrom; -use serde::{Deserialize, Serialize}; use tari_crypto::ristretto::{pedersen::PedersenCommitment, RistrettoPublicKey, RistrettoSchnorr, RistrettoSecretKey}; use tari_utilities::{ hex::{from_hex, to_hex, Hex, HexError}, @@ -58,7 +57,7 @@ pub const TYPE_PUBKEY: u8 = 4; pub const TYPE_SIG: u8 = 5; pub const TYPE_SCALAR: u8 = 6; -#[derive(Debug, Clone, PartialEq, Eq, Serialize, Deserialize)] +#[derive(Debug, Clone, PartialEq, Eq)] pub enum StackItem { Number(i64), Hash(HashValue), @@ -178,7 +177,7 @@ stack_item_from!(RistrettoPublicKey => PublicKey); stack_item_from!(RistrettoSchnorr => Signature); stack_item_from!(ScalarValue => Scalar);
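
With the two impls above, the wire format depends on the serializer: human-readable formats carry the tari-script hex encoding as a string, while binary formats carry the same bytes directly, so both agree with the consensus encoding. A round-trip sketch (assumes `serde_json` is available as a dev-dependency; `ExecutionStack` as defined above):

    #[test]
    fn execution_stack_serde_round_trip() -> Result<(), Box<dyn std::error::Error>> {
        let stack = ExecutionStack::default();
        let json = serde_json::to_string(&stack)?; // a hex string; "" for an empty stack
        let back: ExecutionStack = serde_json::from_str(&json)?;
        assert_eq!(stack, back);
        Ok(())
    }
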
-#[derive(Debug, Default, Clone, PartialEq, Eq, Serialize, Deserialize)] +#[derive(Debug, Default, Clone, PartialEq, Eq)] pub struct ExecutionStack { items: Vec<StackItem>, } @@ -262,7 +261,7 @@ impl ExecutionStack { } /// Return a binary array representation of the input stack - pub fn as_bytes(&self) -> Vec<u8> { + pub fn to_bytes(&self) -> Vec<u8> { self.items.iter().fold(Vec::new(), |mut bytes, item| { item.to_bytes(&mut bytes); bytes @@ -317,7 +316,7 @@ impl Hex for ExecutionStack { } fn to_hex(&self) -> String { - to_hex(&self.as_bytes()) + to_hex(&self.to_bytes()) } } @@ -361,11 +360,21 @@ mod test { use tari_crypto::{ hash::blake2::Blake256, keys::{PublicKey, SecretKey}, - ristretto::{utils, utils::SignatureSet, RistrettoPublicKey, RistrettoSchnorr, RistrettoSecretKey}, + ristretto::{ + pedersen::PedersenCommitment, + utils, + utils::SignatureSet, + RistrettoPublicKey, + RistrettoSchnorr, + RistrettoSecretKey, + }, + }; + use tari_utilities::{ + hex::{from_hex, Hex}, + message_format::MessageFormat, }; - use tari_utilities::hex::{from_hex, Hex}; - use crate::{op_codes::ScalarValue, ExecutionStack, StackItem}; + use crate::{op_codes::ScalarValue, ExecutionStack, HashValue, StackItem}; #[test] fn as_bytes_roundtrip() { @@ -378,7 +387,7 @@ } = utils::sign::<Blake256>(&k, b"hi").unwrap(); let items = vec![Number(5432), Number(21), Signature(s), PublicKey(p)]; let stack = ExecutionStack::new(items); - let bytes = stack.as_bytes(); + let bytes = stack.to_bytes(); let stack2 = ExecutionStack::from_bytes(&bytes).unwrap(); assert_eq!(stack, stack2); } @@ -445,4 +454,37 @@ mod test { panic!("Expected scalar") } } + + #[test] + fn serde_serialization_non_breaking() { + const SERDE_ENCODED_BYTES: &str = "ce0000000000000006fdf9fc345d2cdd8aff624a55f824c7c9ce3cc9\ + 72e011b4e750e417a90ecc5da50456c0fa32558d6edc0916baa26b48e745de834571534ca253ea82435f08ebbc\ + 7c0556c0fa32558d6edc0916baa26b48e745de834571534ca253ea82435f08ebbc7c6db1023d5c46d78a97da8eb\ + 6c5a37e00d5f2fee182dcb38c1b6c65e90a43c10906fdf9fc345d2cdd8aff624a55f824c7c9ce3cc972e011b4e7\ + 50e417a90ecc5da501d2040000000000000356c0fa32558d6edc0916baa26b48e745de834571534ca253ea82435\ + f08ebbc7c"; + let p = + RistrettoPublicKey::from_hex("56c0fa32558d6edc0916baa26b48e745de834571534ca253ea82435f08ebbc7c").unwrap(); + let s = + RistrettoSecretKey::from_hex("6db1023d5c46d78a97da8eb6c5a37e00d5f2fee182dcb38c1b6c65e90a43c109").unwrap(); + let sig = RistrettoSchnorr::new(p.clone(), s); + let m: HashValue = Blake256::digest(b"Hello Tari Script").into(); + let s: ScalarValue = m; + let commitment = PedersenCommitment::from_public_key(&p); + + // Includes all variants for StackItem + let mut expected_inputs = inputs!(s, p, sig, m, 1234, commitment); + let stack = ExecutionStack::from_binary(&from_hex(SERDE_ENCODED_BYTES).unwrap()).unwrap(); + + for (i, item) in stack.items.into_iter().enumerate().rev() { + assert_eq!( + item, + expected_inputs.pop().unwrap(), + "Stack items did not match at index {}", + i + ); + } + + assert!(expected_inputs.is_empty()); + } } diff --git a/infrastructure/test_utils/Cargo.toml b/infrastructure/test_utils/Cargo.toml index 809e2b0f67..9a6262255a 100644 --- a/infrastructure/test_utils/Cargo.toml +++ b/infrastructure/test_utils/Cargo.toml @@ -1,7 +1,7 @@ [package] name = "tari_test_utils" description = "Utility functions used in Tari test functions" -version = "0.38.5" +version = "0.38.7" authors = ["The Tari Development Community"] edition = "2018" license = "BSD-3-Clause" diff --git a/integration_tests/config/config.toml 
b/integration_tests/config/config.toml deleted file mode 100644 index 569d3b05c8..0000000000 --- a/integration_tests/config/config.toml +++ /dev/null @@ -1,380 +0,0 @@ -######################################################################################################################## -# # -# Common Configuration Options # -# # -######################################################################################################################## - -[common] -#override_from="dibbler" -#base_path="/.tari" -#data_dir="data" - -[auto_update] -# This interval in seconds to check for software updates. Setting this to 0 disables checking. -check_interval = 300 - -[dibbler.auto_update] -# Customize the hosts that are used to check for updates. These hosts must contain update information in DNS TXT records. -update_uris = ["updates.dibbler.taripulse.com"] -# Customize the location of the update SHA hashes and maintainer-signed signature. -# "auto_update.hashes_url" = "https://
/hashes.txt" -# "auto_update.hashes_sig_url" = "https://
/hashes.txt.sig" - -[metrics] -# server_bind_address = "127.0.0.1:5577" -# push_endpoint = http://localhost:9091/metrics/job/base-node -# Configuration options for dibbler testnet - -[dibbler.p2p.seeds] -dns_seeds = ["seeds.dibbler.tari.com"] -peer_seeds = [ - # 333388d1cbe3e2bd17453d052f - "c2eca9cf32261a1343e21ed718e79f25bfc74386e9305350b06f62047f519347::/onion3/6yxqk2ybo43u73ukfhyc42qn25echn4zegjpod2ccxzr2jd5atipwzqd:18141", - # 555575715a49fc242d756e52ca - "42fcde82b44af1de95a505d858cb31a422c56c4ac4747fbf3da47d648d4fc346::/onion3/2l3e7ysmihc23zybapdrsbcfg6omtjtfkvwj65dstnfxkwtai2fawtyd:18141", - # 77771f53be07fab4be5f1e1ff7 - "50e6aa8f6c50f1b9d9b3d438dfd2a29cfe1f3e3a650bd9e6b1e10f96b6c38f4d::/onion3/7s6y3cz5bnewlj5ypm7sekhgvqjyrq4bpaj5dyvvo7vxydj7hsmyf5ad:18141", - # 9999016f1f3a6162dddf5a45aa - "36a9df45e1423b5315ffa7a91521924210c8e1d1537ad0968450f20f21e5200d::/onion3/v24qfheti2rztlwzgk6v4kdbes3ra7mo3i2fobacqkbfrk656e3uvnid:18141", - # bbbb8358387d81c388fadb4649 - "be128d570e8ec7b15c101ee1a56d6c56dd7d109199f0bd02f182b71142b8675f::/onion3/ha422qsy743ayblgolui5pg226u42wfcklhc5p7nbhiytlsp4ir2syqd:18141", - # eeeeb0a943ed143e613a135392 - "3e0321c0928ca559ab3c0a396272dfaea705efce88440611a38ff3898b097217::/onion3/sl5ledjoaisst6d4fh7kde746dwweuge4m4mf5nkzdhmy57uwgtb7qqd:18141", - # 66664a0f95ce468941bb9de228 - "b0f797e7413b39b6646fa370e8394d3993ead124b8ba24325c3c07a05e980e7e::/ip4/35.177.93.69/tcp/18189", - # 22221bf814d5e524fce9ba5787 - "0eefb45a4de9484eca74846a4f47d2c8d38e76be1fec63b0112bd00d297c0928::/ip4/13.40.98.39/tcp/18189", - # 4444a0efd8388739d563bdd979 - "544ed2baed414307e119d12894e27f9ddbdfa2fd5b6528dc843f27903e951c30::/ip4/13.40.189.176/tcp/18189" -] - -######################################################################################################################## -# # -# Base Node Configuration Options # -# # -######################################################################################################################## - -# If you are not running a Tari Base node, you can simply leave everything in this section commented out. Base nodes -# help maintain the security of the Tari token and are the surest way to preserve your privacy and be 100% sure that -# no-one is cheating you out of your money. - -[base_node] -# Selected network -network = "dibbler" -# The socket to expose for the gRPC base node server -grpc_address = "/ip4/127.0.0.1/tcp/18142" - -# Spin up and use a built-in Tor instance. This only works on macos/linux and you must comment out tor_control_address below. -# This requires that the base node was built with the optional "libtor" feature flag. -#use_libtor = true - -[dibbler.base_node] -# A path to the file that stores your node identity and secret key -identity_file = "config/base_node_id_dibbler.json" - -[base_node.p2p] -# The node's publicly-accessible hostname. This is the host name that is advertised on the network so that -# peers can find you. -# _NOTE_: If using the `tor` transport type, public_address will be ignored and an onion address will be -# automatically configured -public_address = "/ip4/172.2.3.4/tcp/18189" - -# Optionally bind an additional TCP socket for inbound Tari P2P protocol commms. 
-# Use cases include: -# - allowing wallets to locally connect to their base node, rather than through tor, when used in conjunction with `tor_proxy_bypass_addresses` -# - multiple P2P addresses, one public over DNS and one private over TOR -# - a "bridge" between TOR and TCP-only nodes -# auxiliary_tcp_listener_address = "/ip4/127.0.0.1/tcp/9998" - -[base_node.p2p.transport] -# -------------- Transport configuration -------------- -# Use TCP to connect to the Tari network. This transport can only communicate with TCP/IP addresses, so peers with -# e.g. tor onion addresses will not be contactable. -#transport = "tcp" -# The address and port to listen for peer connections over TCP. -tcp.listener_address = "/ip4/0.0.0.0/tcp/18189" -# Configures a tor proxy used to connect to onion addresses. All other traffic uses direct TCP connections. -# This setting is optional however, if it is not specified, this node will not be able to connect to nodes that -# only advertise an onion address. -tcp.tor_socks_address = "/ip4/127.0.0.1/tcp/36050" -tcp.tor_socks_auth = "none" - -# # Configures the node to run over a tor hidden service using the Tor proxy. This transport recognises ip/tcp, -# # onion v2, onion v3 and dns addresses. -#type = "tor" -# Address of the tor control server -tor.control_address = "/ip4/127.0.0.1/tcp/9051" -# Authentication to use for the tor control server -tor.control_auth = "none" # or "password=xxxxxx" -# The onion port to use. -tor.onion_port = 18141 -# When these peer addresses are encountered when dialing another peer, the tor proxy is bypassed and the connection is made -# directly over TCP. /ip4, /ip6, /dns, /dns4 and /dns6 are supported. -tor.proxy_bypass_addresses = [] -#tor.proxy_bypass_addresses = ["/dns4/my-foo-base-node/tcp/9998"] -# When using the tor transport and set to true, outbound TCP connections bypass the tor proxy. Defaults to false for better privacy -tor.proxy_bypass_for_outbound_tcp = false - -# Use a SOCKS5 proxy transport. This transport recognises any addresses supported by the proxy. -#type = "socks5" -# The address of the SOCKS5 proxy -# Traffic will be forwarded to tcp.listener_address -socks.proxy_address = "/ip4/127.0.0.1/tcp/9050" -socks.auth = "none" # or "username_password=username:xxxxxxx" - -[base_node.p2p.dht] -auto_join = true -database_url = "base_node_dht.db" -# do we allow test addresses to be accepted like 127.0.0.1 -allow_test_addresses = false - -[base_node.p2p.dht.saf] - -[base_node.lmdb] -#init_size_bytes = 1000000 -#grow_size_bytes = 1600000 -#resize_threshold_bytes = 1600000 - -[base_node.storage] -# Sets the pruning horizon. -#pruning_horizon = 0 -# Set to true to record all reorgs. Recorded reorgs can be viewed using the list-reorgs command. -track_reorgs = true - -######################################################################################################################## -# # -# Wallet Configuration Options # -# # -######################################################################################################################## - -[wallet] -# Override common.network for wallet -override_from = "dibbler" - -# The relative folder to store your local key data and transaction history. DO NOT EVER DELETE THIS FILE unless you -# a) have backed up your seed phrase and -# b) know what you are doing! -db_file = "wallet/wallet.dat" - -# The socket to expose for the gRPC wallet server. This value is ignored if grpc_enabled is false. 
-grpc_address = "/ip4/127.0.0.1/tcp/18143" - -# Console wallet password -# Should you wish to start your console wallet without typing in your password, the following options are available: -# 1. Start the console wallet with the --password=secret argument, or -# 2. Set the environment variable TARI_WALLET_PASSWORD=secret before starting the console wallet, or -# 3. Set the "password" key in this [wallet] section of the config -# password = "secret" - -# WalletNotify -# Allows you to execute a script or program when these transaction events are received by the console wallet: -# - transaction received -# - transaction sent -# - transaction cancelled -# - transaction mined but unconfirmed -# - transaction mined and confirmed -# An example script is available here: applications/tari_console_wallet/src/notifier/notify_example.sh -# notify = "/path/to/script" - -# This is the timeout period that will be used to monitor TXO queries to the base node (default = 60). Larger values -# are needed for wallets with many (>1000) TXOs to be validated. -#base_node_query_timeout = 180 -# The amount of seconds added to the current time (Utc) which will then be used to check if the message has -# expired or not when processing the message (default = 10800). -#saf_expiry_duration = 10800 -# This is the number of block confirmations required for a transaction to be considered completely mined and -# confirmed. (default = 3) -#transaction_num_confirmations_required = 3 -# This is the timeout period that will be used for base node broadcast monitoring tasks (default = 60) -#transaction_broadcast_monitoring_timeout = 180 -# This is the timeout period that will be used for chain monitoring tasks (default = 60) -#transaction_chain_monitoring_timeout = 60 -# This is the timeout period that will be used for sending transactions directly (default = 20) -#transaction_direct_send_timeout = 180 -# This is the timeout period that will be used for sending transactions via broadcast mode (default = 60) -#transaction_broadcast_send_timeout = 180 -# This is the size of the event channel used to communicate transaction status events to the wallet's UI. A busy console -# wallet doing thousands of bulk payments or used for stress testing needs a fairly big size (>10000) (default = 1000). -#transaction_event_channel_size = 25000 -# This is the size of the event channel used to communicate base node events to the wallet. A busy console -# wallet doing thousands of bulk payments or used for stress testing needs a fairly big size (>3000) (default = 250). -#base_node_event_channel_size = 3500 -# This is the size of the event channel used to communicate output manager events to the wallet. A busy console -# wallet doing thousands of bulk payments or used for stress testing needs a fairly big size (>3000) (default = 250). -#output_manager_event_channel_size = 3500 -# This is the size of the event channel used to communicate base node update events to the wallet. A busy console -# wallet doing thousands of bulk payments or used for stress testing needs a fairly big size (>300) (default = 50). -#base_node_update_publisher_channel_size = 500 -# If a large amount of tiny valued uT UTXOs are used as inputs to a transaction, the fee may be larger than -# the transaction amount. Set this value to `false` to allow spending of "dust" UTXOs for small valued -# transactions (default = true). 
-#prevent_fee_gt_amount = false -# This option specifies the transaction routing mechanism as being directly between wallets, making -# use of store and forward or using any combination of these. -# (options: "DirectOnly", "StoreAndForwardOnly", DirectAndStoreAndForward". default: "DirectAndStoreAndForward"). -#transaction_routing_mechanism = "DirectAndStoreAndForward" - -# When running the console wallet in command mode, use these values to determine what "stage" and timeout to wait -# for sent transactions. -# The stages are: -# - "DirectSendOrSaf" - The transaction was initiated and was accepted via Direct Send or Store And Forward. -# - "Negotiated" - The recipient replied and the transaction was negotiated. -# - "Broadcast" - The transaction was broadcast to the base node mempool. -# - "MinedUnconfirmed" - The transaction was successfully detected as mined but unconfirmed on the blockchain. -# - "Mined" - The transaction was successfully detected as mined and confirmed on the blockchain. - -# The default values are: "Broadcast", 300 -#command_send_wait_stage = "Broadcast" -#command_send_wait_timeout = 300 - -# The base nodes that the wallet should use for service requests and tracking chain state. -# base_node_service_peers = ["public_key::net_address", ...] -# base_node_service_peers = ["e856839057aac496b9e25f10821116d02b58f20129e9b9ba681b830568e47c4d::/onion3/exe2zgehnw3tvrbef3ep6taiacr6sdyeb54be2s25fpru357r4skhtad:18141"] - -# Configuration for the wallet's base node service -# The refresh interval, defaults to 10 seconds -#base_node_service_refresh_interval = 30 -# The maximum age of service requests in seconds, requests older than this are discarded -#base_node_service_request_max_age = 180 - -#[base_node.transport.tor] -#control_address = "/ip4/127.0.0.1/tcp/9051" -#control_auth_type = "none" # or "password" -# Required for control_auth_type = "password" -#control_auth_password = "super-secure-password" - -[wallet.p2p] - -[wallet.p2p.transport] -# # Configures the node to run over a tor hidden service using the Tor proxy. This transport recognises ip/tcp, -# # onion v2, onion v3 and dns addresses. -type = "tor" -# Address of the tor control server -tor.control_address = "/ip4/127.0.0.1/tcp/9051" -# Authentication to use for the tor control server -tor.control_auth = "none" # or "password=xxxxxx" -# The onion port to use. -tor.onion_port = 18141 -# When these peer addresses are encountered when dialing another peer, the tor proxy is bypassed and the connection is made -# directly over TCP. /ip4, /ip6, /dns, /dns4 and /dns6 are supported. -tor.proxy_bypass_addresses = [] -# When using the tor transport and set to true, outbound TCP connections bypass the tor proxy. 
Defaults to false for better privacy -tor.proxy_bypass_for_outbound_tcp = false - -[dibbler.wallet] -network = "dibbler" - - - -######################################################################################################################## -# # -# Miner Configuration Options # -# # -######################################################################################################################## - -[miner] -# Number of mining threads -# Default: number of logical CPU cores -#num_mining_threads=8 - -# GRPC address of base node -#base_node_grpc_address = "127.0.0.1:18142" - -# GRPC address of console wallet -#wallet_grpc_address = "127.0.0.1:18143" - -# Start mining only when base node is bootstrapped -# and current block height is on the tip of network -# Default: true -#mine_on_tip_only=true - -# Will check tip with node every N seconds and restart mining -# if height already taken and option `mine_on_tip_only` is set -# to true -# Default: 30 seconds -#validate_tip_timeout_sec=30 - -# Stratum Mode configuration -# mining_pool_address = "miningcore.tari.com:3052" -# mining_wallet_address = "YOUR_WALLET_PUBLIC_KEY" -# mining_worker_name = "worker1" - -######################################################################################################################## -# # -# Merge Mining Configuration Options # -# # -######################################################################################################################## - -[merge_mining_proxy] -#override_from = "dibbler" -monerod_url = [# stagenet - "http://stagenet.xmr-tw.org:38081", - "http://stagenet.community.xmr.to:38081", - "http://monero-stagenet.exan.tech:38081", - "http://xmr-lux.boldsuck.org:38081", - "http://singapore.node.xmr.pm:38081", -] -base_node_grpc_address = "/ip4/127.0.0.1/tcp/18142" -console_wallet_grpc_address = "/ip4/127.0.0.1/tcp/18143" - -# Address of the tari_merge_mining_proxy application -listener_address = "/ip4/127.0.0.1/tcp/18081" - -# In sole merged mining, the block solution is usually submitted to the Monero blockchain -# (monerod) as well as to the Tari blockchain, then this setting should be "true". With pool -# merged mining, there is no sense in submitting the solution to the Monero blockchain as the -# pool does that, then this setting should be "false". (default = true). -submit_to_origin = true - -# The merge mining proxy can either wait for the base node to achieve initial sync at startup before it enables mining, -# or not. If merge mining starts before the base node has achieved initial sync, those Tari mined blocks will not be -# accepted. (Default value = true; will wait for base node initial sync). -#wait_for_initial_sync_at_startup = true - -# Monero auth params -monerod_username = "" -monerod_password = "" -monerod_use_auth = false - -#[dibbler.merge_mining_proxy] -# Put any network specific settings here - - - -######################################################################################################################## -# # -# Validator Node Configuration Options # -# # -######################################################################################################################## - -[validator_node] - -phase_timeout = 30 - -# If set to false, there will be no scanning at all. -scan_for_assets = true -# How often do we want to scan the base layer for changes. -new_asset_scanning_interval = 10 -# If set then only the specific assets will be checked. 
-# assets_allow_list = [""] - - -constitution_auto_accept = false -constitution_management_polling_interval_in_seconds = 10 -constitution_management_polling_interval = 5 -constitution_management_confirmation_time = 50 -######################################################################################################################## -# # -# Collectibles Configuration Options # -# # -######################################################################################################################## - -[collectibles] -# GRPC address of validator node -#validator_node_grpc_address = "/ip4/127.0.0.1/tcp/18144" - -# GRPC address of base node -#base_node_grpc_address = "/ip4/127.0.0.1/tcp/18142" - -# GRPC address of wallet -#wallet_grpc_address = "/ip4/127.0.0.1/tcp/18143" diff --git a/integration_tests/cucumber.js b/integration_tests/cucumber.js index 544030439c..5b5dd3baf7 100644 --- a/integration_tests/cucumber.js +++ b/integration_tests/cucumber.js @@ -1,8 +1,7 @@ module.exports = { - default: - "--tags 'not @long-running and not @wallet-ffi and not @broken' --fail-fast", + default: "--tags 'not @long-running and not @wallet-ffi and not @broken' ", none: " ", - ci: "--tags '@critical and not @long-running and not @broken ' --fail-fast", + ci: "--tags '@critical and not @long-running and not @broken '", critical: "--format @cucumber/pretty-formatter --tags @critical", "non-critical": "--tags 'not @critical and not @long-running and not @broken'", diff --git a/integration_tests/features/BaseNodeAutoUpdate.feature b/integration_tests/features/BaseNodeAutoUpdate.feature deleted file mode 100644 index bc05149f8f..0000000000 --- a/integration_tests/features/BaseNodeAutoUpdate.feature +++ /dev/null @@ -1,15 +0,0 @@ -# Copyright 2022 The Tari Project -# SPDX-License-Identifier: BSD-3-Clause - -@auto_update -Feature: AutoUpdate - - @broken - Scenario: Auto update finds a new update on base node - Given I have a node NODE_A with auto update enabled - Then NODE_A has a new software update - - @broken - Scenario: Auto update ignores update with invalid signature on base node - Given I have a node NODE_A with auto update configured with a bad signature - Then NODE_A does not have a new software update diff --git a/integration_tests/features/BaseNodeConnectivity.feature b/integration_tests/features/BaseNodeConnectivity.feature index 37300e227a..4dbd112c14 100644 --- a/integration_tests/features/BaseNodeConnectivity.feature +++ b/integration_tests/features/BaseNodeConnectivity.feature @@ -21,13 +21,11 @@ Feature: Base Node Connectivity Then SEED_A is connected to WALLET_A Scenario: Base node lists heights - Given I have 1 seed nodes - And I have a base node N1 connected to all seed nodes + Given I have a seed node N1 When I mine 5 blocks on N1 Then node N1 lists heights 1 to 5 Scenario: Base node lists headers - Given I have 1 seed nodes - And I have a base node BN1 connected to all seed nodes + Given I have a seed node BN1 When I mine 5 blocks on BN1 Then node BN1 lists headers 1 to 5 with correct heights diff --git a/integration_tests/features/WalletAutoUpdate.feature b/integration_tests/features/WalletAutoUpdate.feature deleted file mode 100644 index 2a9d89c000..0000000000 --- a/integration_tests/features/WalletAutoUpdate.feature +++ /dev/null @@ -1,15 +0,0 @@ -# Copyright 2022 The Tari Project -# SPDX-License-Identifier: BSD-3-Clause - -@auto_update -Feature: AutoUpdate - - @broken - Scenario: Auto update finds a new update on wallet - Given I have a wallet WALLET with auto update enabled - 
Then WALLET has a new software update - - @broken - Scenario: Auto update ignores update with invalid signature on wallet - Given I have a wallet WALLET with auto update configured with a bad signature - Then WALLET does not have a new software update diff --git a/integration_tests/helpers/config.js b/integration_tests/helpers/config.js index a3396eb31d..51ae68f4a7 100644 --- a/integration_tests/helpers/config.js +++ b/integration_tests/helpers/config.js @@ -84,6 +84,9 @@ function baseEnvs(peerSeeds = [], forceSyncPeers = [], _committee = []) { ["localnet.base_node.p2p.dht.flood_ban_max_msg_count"]: "100000", ["localnet.base_node.p2p.dht.database_url"]: "localnet/dht.db", ["localnet.p2p.seeds.dns_seeds_use_dnssec"]: "false", + ["localnet.base_node.lmdb.init_size_bytes"]: 16000000, + ["localnet.base_node.lmdb.grow_size_bytes"]: 16000000, + ["localnet.base_node.lmdb.resize_threshold_bytes"]: 1024, ["localnet.wallet.identity_file"]: "walletid.json", ["localnet.wallet.contacts_auto_ping_interval"]: "5", @@ -101,9 +104,7 @@ function baseEnvs(peerSeeds = [], forceSyncPeers = [], _committee = []) { ["merge_mining_proxy.monerod_use_auth"]: false, ["merge_mining_proxy.monerod_username"]: "", ["merge_mining_proxy.monerod_password"]: "", - // ["localnet.base_node.storage_db_init_size"]: 100000000, - // ["localnet.base_node.storage.db_resize_threshold"]: 10000000, - // ["localnet.base_node.storage.db_grow_size"]: 20000000, + ["merge_mining_proxy.wait_for_initial_sync_at_startup"]: false, ["miner.num_mining_threads"]: "1", ["miner.mine_on_tip_only"]: true, diff --git a/integration_tests/package-lock.json b/integration_tests/package-lock.json index 2dd066682e..403f326a61 100644 --- a/integration_tests/package-lock.json +++ b/integration_tests/package-lock.json @@ -9,13 +9,18 @@ "version": "1.0.0", "license": "ISC", "dependencies": { + "@grpc/grpc-js": "^1.2.3", + "@grpc/proto-loader": "^0.5.5", "archiver": "^5.3.1", "axios": "^0.21.4", "clone-deep": "^4.0.1", "csv-parser": "^3.0.0", "dateformat": "^3.0.3", + "fs": "^0.0.1-security", "glob": "^7.2.3", + "grpc-promise": "^1.4.0", "json5": "^2.2.1", + "path": "^0.12.7", "sha3": "^2.1.3", "tari_crypto": "v0.14.0", "utf8": "^3.0.0", @@ -2332,6 +2337,11 @@ } } }, + "node_modules/fs": { + "version": "0.0.1-security", + "resolved": "https://registry.npmjs.org/fs/-/fs-0.0.1-security.tgz", + "integrity": "sha512-3XY9e1pP0CVEUCdj5BmfIZxRBTSDycnbqhIOGec9QYtmVH2fbLpj86CFWkrNOkt/Fvty4KZG5lTglL9j/gJ87w==" + }, "node_modules/fs-constants": { "version": "1.0.0", "resolved": "https://registry.npmjs.org/fs-constants/-/fs-constants-1.0.0.tgz", @@ -3119,6 +3129,15 @@ "node": ">=6" } }, + "node_modules/path": { + "version": "0.12.7", + "resolved": "https://registry.npmjs.org/path/-/path-0.12.7.tgz", + "integrity": "sha512-aXXC6s+1w7otVF9UletFkFcDsJeO7lSZBPUQhtb5O0xJe8LtYhj/GxldoL09bBj9+ZmE2hNoHqQSFMN5fikh4Q==", + "dependencies": { + "process": "^0.11.1", + "util": "^0.10.3" + } + }, "node_modules/path-is-absolute": { "version": "1.0.1", "resolved": "https://registry.npmjs.org/path-is-absolute/-/path-is-absolute-1.0.1.tgz", @@ -3187,6 +3206,14 @@ "node": ">=6.0.0" } }, + "node_modules/process": { + "version": "0.11.10", + "resolved": "https://registry.npmjs.org/process/-/process-0.11.10.tgz", + "integrity": "sha512-cdGef/drWFoydD1JsMzuFf8100nZl+GT+yacc2bEced5f9Rjk4z+WtFUTBu9PhOi9j/jfmBPu0mMEY4wIdAF8A==", + "engines": { + "node": ">= 0.6.0" + } + }, "node_modules/process-nextick-args": { "version": "2.0.1", "resolved": 
"https://registry.npmjs.org/process-nextick-args/-/process-nextick-args-2.0.1.tgz", @@ -3832,6 +3859,14 @@ "resolved": "https://registry.npmjs.org/utf8/-/utf8-3.0.0.tgz", "integrity": "sha512-E8VjFIQ/TyQgp+TZfS6l8yp/xWppSAHzidGiRrqe4bK4XP9pTRyKFgGJpO3SN7zdX4DeomTrwaseCHovfpFcqQ==" }, + "node_modules/util": { + "version": "0.10.4", + "resolved": "https://registry.npmjs.org/util/-/util-0.10.4.tgz", + "integrity": "sha512-0Pm9hTQ3se5ll1XihRic3FDIku70C+iHUdT/W926rSgHV5QgXsYbKZN8MSC3tJtSkhuROzvsQjAaFENRXr+19A==", + "dependencies": { + "inherits": "2.0.3" + } + }, "node_modules/util-arity": { "version": "1.1.0", "resolved": "https://registry.npmjs.org/util-arity/-/util-arity-1.1.0.tgz", @@ -3843,6 +3878,11 @@ "resolved": "https://registry.npmjs.org/util-deprecate/-/util-deprecate-1.0.2.tgz", "integrity": "sha512-EPD5q1uXyFxJpCrLnCc1nHnq3gOa6DZBocAIiI2TaSCA7VCJ1UJDMagCzIkXNsUYfD1daK//LTEQ8xiIbrHtcw==" }, + "node_modules/util/node_modules/inherits": { + "version": "2.0.3", + "resolved": "https://registry.npmjs.org/inherits/-/inherits-2.0.3.tgz", + "integrity": "sha512-x00IRNXNy63jwGkJmzPigoySHbaqpNuzKbBOmzK+g2OdZpQ9w+sxCN+VSB3ja7IAge2OP2qpfxTjeNcyjmW1uw==" + }, "node_modules/uuid": { "version": "3.4.0", "integrity": "sha512-HjSDRw6gZE5JMggctHBcjVak08+KEVhSIiDzFnT9S9aegmp85S/bReBVTb4QTFaRNptJ9kuYaNhnbNEOkbKb/A==", @@ -5778,6 +5818,11 @@ "resolved": "https://registry.npmjs.org/follow-redirects/-/follow-redirects-1.14.8.tgz", "integrity": "sha512-1x0S9UVJHsQprFcEC/qnNzBLcIxsjAV905f/UkQxbclCsoTWlacCNOpQa/anodLl2uaEKFhfWOvM2Qg77+15zA==" }, + "fs": { + "version": "0.0.1-security", + "resolved": "https://registry.npmjs.org/fs/-/fs-0.0.1-security.tgz", + "integrity": "sha512-3XY9e1pP0CVEUCdj5BmfIZxRBTSDycnbqhIOGec9QYtmVH2fbLpj86CFWkrNOkt/Fvty4KZG5lTglL9j/gJ87w==" + }, "fs-constants": { "version": "1.0.0", "resolved": "https://registry.npmjs.org/fs-constants/-/fs-constants-1.0.0.tgz", @@ -6405,6 +6450,15 @@ "callsites": "^3.0.0" } }, + "path": { + "version": "0.12.7", + "resolved": "https://registry.npmjs.org/path/-/path-0.12.7.tgz", + "integrity": "sha512-aXXC6s+1w7otVF9UletFkFcDsJeO7lSZBPUQhtb5O0xJe8LtYhj/GxldoL09bBj9+ZmE2hNoHqQSFMN5fikh4Q==", + "requires": { + "process": "^0.11.1", + "util": "^0.10.3" + } + }, "path-is-absolute": { "version": "1.0.1", "resolved": "https://registry.npmjs.org/path-is-absolute/-/path-is-absolute-1.0.1.tgz", @@ -6449,6 +6503,11 @@ "fast-diff": "^1.1.2" } }, + "process": { + "version": "0.11.10", + "resolved": "https://registry.npmjs.org/process/-/process-0.11.10.tgz", + "integrity": "sha512-cdGef/drWFoydD1JsMzuFf8100nZl+GT+yacc2bEced5f9Rjk4z+WtFUTBu9PhOi9j/jfmBPu0mMEY4wIdAF8A==" + }, "process-nextick-args": { "version": "2.0.1", "resolved": "https://registry.npmjs.org/process-nextick-args/-/process-nextick-args-2.0.1.tgz", @@ -6956,6 +7015,21 @@ "resolved": "https://registry.npmjs.org/utf8/-/utf8-3.0.0.tgz", "integrity": "sha512-E8VjFIQ/TyQgp+TZfS6l8yp/xWppSAHzidGiRrqe4bK4XP9pTRyKFgGJpO3SN7zdX4DeomTrwaseCHovfpFcqQ==" }, + "util": { + "version": "0.10.4", + "resolved": "https://registry.npmjs.org/util/-/util-0.10.4.tgz", + "integrity": "sha512-0Pm9hTQ3se5ll1XihRic3FDIku70C+iHUdT/W926rSgHV5QgXsYbKZN8MSC3tJtSkhuROzvsQjAaFENRXr+19A==", + "requires": { + "inherits": "2.0.3" + }, + "dependencies": { + "inherits": { + "version": "2.0.3", + "resolved": "https://registry.npmjs.org/inherits/-/inherits-2.0.3.tgz", + "integrity": "sha512-x00IRNXNy63jwGkJmzPigoySHbaqpNuzKbBOmzK+g2OdZpQ9w+sxCN+VSB3ja7IAge2OP2qpfxTjeNcyjmW1uw==" + } + } + }, "util-arity": { "version": "1.1.0", "resolved": 
"https://registry.npmjs.org/util-arity/-/util-arity-1.1.0.tgz", diff --git a/package-lock.json b/package-lock.json index e2497b00af..30a0a96353 100644 --- a/package-lock.json +++ b/package-lock.json @@ -1,6 +1,6 @@ { "name": "tari", - "version": "0.38.5", + "version": "0.38.7", "lockfileVersion": 2, "requires": true, "packages": {}