Add test for aggregated revoked HTLC claim on anchors channel #2034
Right, I think this is totally fine, but what's the worst case here? Can we get a combinatorial explosion from the ~800 HTLCs in a state? In practice we rarely combine HTLC claims, but say we have one HTLC claim which is soon and 800 which are a ways away: we could combine them into one claim, and then the counterparty could peel them off by claiming on-chain one at a time? That's still only 800 entries, which is fine; is there a way for them to somehow get N^2 behavior or worse?
As I was confirming the runtime cost here, I realized that `pending_claim_requests` keeps a bunch of stale entries if something like this ends up happening, because we don't clear the previous claim (now invalid, since the counterparty claimed at least one of the same outputs as us). The reason for this is to merge the output the counterparty claimed back into its original claim if it's reorged out of the chain.

So if we aggregate the HTLCs to timeout into a single claim, and the counterparty claims them one-by-one with the preimage, we'll end up with a new entry in `pending_claim_requests` per HTLC the counterparty claimed, but only one entry for the latest claim in `pending_claim_events`. The reason the events don't grow linearly in this case is that we can be more aggressive about removing them, since they don't represent the "main" state of a claim/request; they're simply a proxy to the consumer's claim.

- After 0 counterparty claims: `pending_claim_requests` has one entry (483 HTLCs); `pending_claim_events` has one entry (483 HTLCs).
- After 1: `pending_claim_requests` has a first entry (483 HTLCs, stale) and a second (482 HTLCs); `pending_claim_events` has one entry (482 HTLCs).
- After 2: `pending_claim_requests` has a first entry (483 HTLCs, stale), a second (482 HTLCs, stale), and a third (481 HTLCs); `pending_claim_events` has one entry (481 HTLCs).
Ah! Indeed, I misread. Yea, okay, so this doesn't even grow at all, cool. FWIW, I'm not at all worried about one-entry-per-HTLC (isn't it 966, i.e. 483 max per direction?), only if we get to N^3 or something stupid do we need to start worrying about it.
I still think it's 483, as we shouldn't aggregate offered outputs (timeout path for us) and received outputs (preimage path for us) in a single `pending_claim_events` entry. The runtime cost reasoning sounds correct; it's at worst 482 `pending_claim_requests` entries + 1 `pending_claim_events` entry, iiuc.
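To make the growth pattern concrete, here is a toy model of the two maps (a sketch only: the names mirror the fields discussed above, but none of this is LDK code and the bookkeeping is simplified):

```rust
use std::collections::HashMap;

fn main() {
    const AGGREGATED_HTLCS: usize = 483;

    // claim id -> number of HTLCs the claim covers
    let mut pending_claim_requests: HashMap<usize, usize> = HashMap::new();
    let mut pending_claim_events: HashMap<usize, usize> = HashMap::new();

    // One aggregated claim covering all 483 HTLCs to start with.
    let mut next_claim_id = 0;
    pending_claim_requests.insert(next_claim_id, AGGREGATED_HTLCS);
    pending_claim_events.insert(next_claim_id, AGGREGATED_HTLCS);

    // The counterparty peels off one HTLC at a time with the preimage. The old
    // request is kept around (stale) so the output can be merged back on a
    // reorg, but the event for the old claim is dropped and replaced.
    for claimed in 1..AGGREGATED_HTLCS {
        let remaining = AGGREGATED_HTLCS - claimed;
        pending_claim_events.remove(&next_claim_id);
        next_claim_id += 1;
        pending_claim_requests.insert(next_claim_id, remaining);
        pending_claim_events.insert(next_claim_id, remaining);

        // Requests grow by exactly one entry per counterparty claim (linear),
        // while events stay at a single entry: no N^2 blow-up.
        assert_eq!(pending_claim_requests.len(), claimed + 1);
        assert_eq!(pending_claim_events.len(), 1);
    }
}
```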
We could document the above reasoning near `pending_claim_events`.
We still re-generate the adequate `ClaimEvent` (L965 in `block_disconnected`), though I'm not sure we have test coverage for both the post-anchor HTLC bump and commitment bump cases? At least it doesn't seem so from a look at `monitor_tests`, `reorg_tests` and `functional_tests`. This can be added in future PRs to avoid delaying this one further.
Yeah, I definitely plan on improving the test coverage; this PR is mostly just about addressing the revoked commitment case.
I apologize for being dumb, but what's the benefit of cfg(debug_assertions) with an assert over debug_assert with no config flag? Is it just an optimization to not do the counting?
Yeah, we only want to run this code with `debug_assertions`, which is active in tests.
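For illustration, here is the difference being discussed (a hypothetical sketch; `Request`, `sanity_check`, and the duplicate-outpoint check are made up, not code from this PR): a `#[cfg(debug_assertions)]` block compiles the whole check, including multi-statement setup such as counting, only into builds with debug assertions enabled, whereas `debug_assert!` gates a single expression.

```rust
struct Request { outpoints: Vec<u32> }

fn sanity_check(requests: &[Request]) {
    // Style 1: the entire block, including the setup/counting work, only
    // exists in builds with `debug_assertions` enabled (debug builds, tests).
    #[cfg(debug_assertions)]
    {
        let mut seen = std::collections::HashSet::new();
        for req in requests {
            for outpoint in &req.outpoints {
                // A plain `assert!` is fine here since the block itself is gated.
                assert!(seen.insert(*outpoint), "output claimed by two requests");
            }
        }
    }

    // Style 2: `debug_assert!` gates a single expression. It is also absent
    // from release builds, but multi-statement setup shared across several
    // checks is awkward to express this way.
    debug_assert!(!requests.is_empty());
}

fn main() {
    sanity_check(&[Request { outpoints: vec![1, 2, 3] }]);
}
```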
this sentence hurts my head lol
if there is an overlap between people working on this code and people who haven't memorized the meaning of the second bit in the channel update message, this needs a comment. However, I'm not sure such an overlap exists.
Just look at the spec 😛 It's checking if the channel is marked disabled once closed.
or… we could use a constant instead of a magic 2. I know it's just used in tests, but given it's a test utility and not an individual test, there might be some merit in not forcing readers to look up the spec. I won't block the PR over this though.
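For example, something like this (a sketch; the constant and helper names are made up, not taken from the codebase). Per BOLT 7, bit 1 (mask 2) of channel_update's channel_flags is the disable flag, which is what the magic 2 refers to:

```rust
// Hypothetical names for illustration only. BOLT 7 defines bit 1 of
// `channel_update.channel_flags` as the "disable" flag.
const CHANNEL_FLAG_DISABLED: u8 = 0b10;

fn is_channel_disabled(channel_flags: u8) -> bool {
    channel_flags & CHANNEL_FLAG_DISABLED != 0
}

fn main() {
    // A closed channel's final channel_update should have the disable bit set.
    let channel_flags: u8 = 2;
    assert!(is_channel_disabled(channel_flags));
}
```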
is there a point in making sure that the handle error message is odd?
Their order doesn't really matter – one message is at the gossip level, the other at the peer/channel level.