
feat(tests): limit number of nilchain payers in tests #36

Open

andreasbros wants to merge 1 commit into main from feat/limit-payers-num-in-tests
Conversation

andreasbros (Member) commented Feb 20, 2025

Motivation

We need to limit the number of payers (blockchain payment accounts) created when running functional tests. Right now the fixture creates 100 of them.

Solution

This change introduces a pool-based approach: a new ManagedNillionChainClientPayer type creates a fixed number of payer instances (limited by MAX_PAYERS_NUM) and hands them out in round-robin fashion via an atomic counter.

If multiple tests end up sharing the same payer, submit_payment() may be called concurrently, but each payer is guarded by an async mutex (inner); only one task acquires the lock at a time, effectively serialising the payment submissions.

Note: MAX_PAYERS_NUM is set to 2, and this doesn't affect functional test performance locally: they still finish in 12-20 seconds.
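For reference, a minimal sketch of what such a pool could look like, assuming the repo's existing NillionChainClientPayer type; the constructor and next_payer names here are illustrative, not necessarily the ones in this PR:

use std::sync::atomic::{AtomicUsize, Ordering};
use std::sync::Arc;
use tokio::sync::Mutex;

pub const MAX_PAYERS_NUM: usize = 2;

// Pool that hands out a fixed set of payers in round-robin order.
// Each payer sits behind an async mutex (the "inner" mentioned above),
// so two tests that receive the same payer have their submit_payment()
// calls serialised rather than racing.
pub struct ManagedNillionChainClientPayer {
    payers: Vec<Arc<Mutex<NillionChainClientPayer>>>,
    next: AtomicUsize,
}

impl ManagedNillionChainClientPayer {
    // Assumes a non-empty `payers` vec of at most MAX_PAYERS_NUM entries.
    pub fn new(payers: Vec<NillionChainClientPayer>) -> Self {
        Self {
            payers: payers.into_iter().map(|p| Arc::new(Mutex::new(p))).collect(),
            next: AtomicUsize::new(0),
        }
    }

    // The atomic counter modulo the pool size gives round-robin
    // assignment without any locking on the hand-out path.
    pub fn next_payer(&self) -> Arc<Mutex<NillionChainClientPayer>> {
        let index = self.next.fetch_add(1, Ordering::Relaxed) % self.payers.len();
        self.payers[index].clone()
    }
}

A test would then do let payer = pool.next_payer(); payer.lock().await.submit_payment(...).await, which is what serialises concurrent submissions that land on the same payer.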

Fixes #
Design discussion issue (if applicable) #

Merge requirement checklist

  • CONTRIBUTING guidelines followed
  • Unit tests added/updated (if applicable)
  • Breaking change analysis completed (if applicable). "Will this change require all network cluster operators to update? Does it break public APIs?"
  • For new features or breaking changes, created a documentation issue in nillion-docs

andreasbros marked this pull request as draft on February 20, 2025 18:58
andreasbros force-pushed the feat/limit-payers-num-in-tests branch 2 times, most recently from 2a38eb9 to 14f597e on February 20, 2025 19:05
andreasbros force-pushed the feat/limit-payers-num-in-tests branch 3 times, most recently from 455ec7f to 4edc596 on February 21, 2025 00:36
andreasbros marked this pull request as ready for review on February 21, 2025 01:58
andreasbros force-pushed the feat/limit-payers-num-in-tests branch from 4edc596 to 965c22b on February 21, 2025 12:34
@@ -207,7 +207,7 @@ pub struct Nodes {
     stash_client: tokio::sync::Mutex<NillionChainClient>,
     next_payment_key_id: AtomicU64,
     next_signing_key_id: AtomicU64,
-    funded_payers: tokio::sync::Mutex<Vec<NillionChainClientPayer>>,
+    funded_payers: ManagedNillionChainClientPayer,
Member:

What was the reasoning behind moving this logic to a new type? This pooling logic could have lived here, right? You essentially moved the existing funded_payers to a new type.

 use tracing_fixture::{tracing, Tracing};
 use xshell::{cmd, Shell};

 const PAYER_FUND_CHUNK: usize = 20;
+pub const MAX_PAYERS_NUM: usize = 2;
Member:

The idea here was to fund lots of payers in a single TX to speed tests up. If this doesn't slow anything down, we might as well have a single payer and get rid of all this pooling logic? E.g. if going from 40+ payers to 2 doesn't slow things down, then going down to 1 won't either.

I think this was added because of bors r+ tests though, so you likely won't see the slowdown locally. Did you see how much slower it is there?
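For context, the chunked funding referred to here amounts to something like the following sketch (fund_payers is a hypothetical helper name, not necessarily the one in the repo; PAYER_FUND_CHUNK = 20 is from the snippet above):

// Hypothetical sketch: one funding transaction per chunk of
// PAYER_FUND_CHUNK payers, instead of one transaction per payer.
for chunk in payers.chunks(PAYER_FUND_CHUNK) {
    stash_client.lock().await.fund_payers(chunk).await?;
}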

Member Author:

Some test(s) failed when running all of them with one payer; I think some tests require two payers, but let me double-check that.
