Batch on-chain claims more aggressively per channel

When batch claiming was first added, it was only done for claims
which were not pinnable, i.e. those which can only be claimed by us.

This was the conservative choice - pinning of outputs claimed by a batch
would leave the entire batch unable to confirm on-chain. However, if
pinning is considered an attack that can be executed with a high
probability of success, then there is no reason not to batch claims of
pinnable outputs together, separate from unpinnable outputs.

Whether specific outputs are pinnable can change over time - those that
are not pinnable will eventually become pinnable at the height at which
our counterparty can spend them. Outputs are treated as pinnable if
they're within `COUNTERPARTY_CLAIMABLE_WITHIN_BLOCKS_PINNABLE` of that
height.
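As a rough sketch of that rule (the struct and method here are illustrative, not the actual `channelmonitor.rs` types; only the constant value matches the diff below), the check amounts to comparing the current height against the height at which the counterparty gains the ability to spend the output:

```rust
/// Illustrative sketch of the pinnability rule described above; the struct and
/// method are hypothetical, not LDK's API.
const COUNTERPARTY_CLAIMABLE_WITHIN_BLOCKS_PINNABLE: u32 = 12;

struct ClaimableOutput {
    /// Height at which our counterparty can also spend this output.
    counterparty_spendable_height: u32,
}

impl ClaimableOutput {
    /// An output only we can spend today still counts as pinnable once the chain
    /// is within the buffer of the height where the counterparty can spend it too.
    fn is_pinnable(&self, cur_height: u32) -> bool {
        cur_height + COUNTERPARTY_CLAIMABLE_WITHIN_BLOCKS_PINNABLE
            >= self.counterparty_spendable_height
    }
}
```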

Aside from outputs being pinnable or not, locktimes are also a factor
for batching claims. HTLC-timeout claims have locktimes fixed by the
counterparty's signature and thus can only be aggregated with other
HTLCs of the same CLTV, which we have to check for.
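A minimal sketch of the resulting aggregation check, under the assumption that claims only batch when they share a pinnability class and, if their locktimes are fixed, carry exactly the same CLTV (hypothetical types; the real logic sits behind `can_merge_with` in the diff below):

```rust
/// Hypothetical batching check mirroring the rules described above; this is not
/// the real `can_merge_with` implementation from the diff below.
#[derive(PartialEq)]
enum PinnableClass { Pinnable, Unpinnable }

struct ClaimRequest {
    class: PinnableClass,
    /// Set when the locktime is fixed by the counterparty's signature
    /// (HTLC-timeout claims); `None` when we are free to pick the locktime.
    fixed_cltv: Option<u32>,
}

fn can_aggregate(a: &ClaimRequest, b: &ClaimRequest) -> bool {
    // Never mix pinnable and unpinnable outputs in one transaction.
    if a.class != b.class {
        return false;
    }
    // Claims with a counterparty-fixed locktime only batch with claims that
    // carry exactly the same CLTV.
    match (a.fixed_cltv, b.fixed_cltv) {
        (None, None) => true,
        (Some(x), Some(y)) => x == y,
        _ => false,
    }
}
```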

The complexity required here is worth it - aggregation can save users a
significant amount of fees in the case of a force-closure, and directly
impacts the number of UTXOs needed as a reserve for anchors.

Co-authored-by: Matt Corallo <git@bluematt.me>
wvanlint and TheBlueMatt committed Sep 25, 2024
1 parent a177511 commit f7e8c9f
Showing 5 changed files with 824 additions and 668 deletions.
13 changes: 10 additions & 3 deletions lightning/src/chain/channelmonitor.rs
@@ -223,9 +223,16 @@ impl_writeable_tlv_based!(HTLCUpdate, {
(4, payment_preimage, option),
});

-/// If an HTLC expires within this many blocks, don't try to claim it in a shared transaction,
-/// instead claiming it in its own individual transaction.
-pub(crate) const CLTV_SHARED_CLAIM_BUFFER: u32 = 12;
+/// If an output goes from claimable only by us to claimable by us or our counterparty within this
+/// many blocks, we consider it pinnable for the purposes of aggregating claims in a single
+/// transaction.
+pub(crate) const COUNTERPARTY_CLAIMABLE_WITHIN_BLOCKS_PINNABLE: u32 = 12;
+
+/// When we go to force-close a channel because an HTLC is expiring, we should ensure that the
+/// HTLC(s) expiring are not considered pinnable, allowing us to aggregate them with other HTLC(s)
+/// expiring at the same time.
+const _HTLCS_NOT_PINNABLE_ON_CLOSE: u32 = CLTV_CLAIM_BUFFER - COUNTERPARTY_CLAIMABLE_WITHIN_BLOCKS_PINNABLE;

/// If an HTLC expires within this many blocks, force-close the channel to broadcast the
/// HTLC-Success transaction.
/// In other words, this is an upper bound on how many blocks we think it can take us to get a
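The `_HTLCS_NOT_PINNABLE_ON_CLOSE` constant added in the hunk above doubles as a compile-time guard: a `u32` subtraction between constants must not underflow during constant evaluation, so the crate fails to build if `COUNTERPARTY_CLAIMABLE_WITHIN_BLOCKS_PINNABLE` is ever raised above `CLTV_CLAIM_BUFFER`. A standalone sketch of the same trick, with an assumed value for `CLTV_CLAIM_BUFFER` (its real definition lives elsewhere in `channelmonitor.rs`):

```rust
// Assumed value for illustration; the real CLTV_CLAIM_BUFFER is defined
// elsewhere in channelmonitor.rs.
const CLTV_CLAIM_BUFFER: u32 = 18;
const COUNTERPARTY_CLAIMABLE_WITHIN_BLOCKS_PINNABLE: u32 = 12;

// If the second constant were raised above the first, this subtraction would
// underflow during constant evaluation and compilation would fail, enforcing
// the relationship between the two buffers at build time.
const _HTLCS_NOT_PINNABLE_ON_CLOSE: u32 =
    CLTV_CLAIM_BUFFER - COUNTERPARTY_CLAIMABLE_WITHIN_BLOCKS_PINNABLE;
```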
4 changes: 2 additions & 2 deletions lightning/src/chain/onchaintx.rs
@@ -772,7 +772,7 @@ impl<ChannelSigner: EcdsaChannelSigner> OnchainTxHandler<ChannelSigner> {
for j in 0..i {
if requests[i].can_merge_with(&requests[j], cur_height) {
let merge = requests.remove(i);
-if let Err(rejected) = requests[j].merge_package(merge) {
+if let Err(rejected) = requests[j].merge_package(merge, cur_height) {
requests.insert(i, rejected);
} else {
break;
@@ -1104,7 +1104,7 @@ impl<ChannelSigner: EcdsaChannelSigner> OnchainTxHandler<ChannelSigner> {
OnchainEvent::ContentiousOutpoint { package } => {
if let Some(pending_claim) = self.claimable_outpoints.get(package.outpoints()[0]) {
if let Some(request) = self.pending_claim_requests.get_mut(&pending_claim.0) {
-assert!(request.merge_package(package).is_ok());
+assert!(request.merge_package(package, height).is_ok());
// Using a HashMap guarantee us than if we have multiple outpoints getting
// resurrected only one bump claim tx is going to be broadcast
bump_candidates.insert(pending_claim.clone(), request.clone());
