Relax closing fee requirements, implement quickclose #4599
Conversation
Looks good! Needs some testing, and we can really only support this behind EXPERIMENTAL_FEATURES since it's not in the spec yet?
OK, this turned into a pretty big rework! There's now an `--experimental-quick-close` option (enabled by default if it's an `anchor_outputs` channel, which is EXPERIMENTAL_FEATURES-only anyway) which means we send the tlv. If we both send and receive it, we quick-close. Along the way I found that our fee estimation for closing txs was not actually using the weight of the closing tx, but of the final commitment tx. This is very different for `anchor_outputs`! @niftynei might want me to break off the first commits for this release?
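For illustration, here's a minimal Python sketch of the weight mix-up described above; it is not c-lightning's actual code, and both weights are rough assumptions (a typical 2-output mutual close vs. a heavier anchor-outputs commitment):

```python
# Minimal sketch, assuming illustrative weights: the closing fee must be
# estimated from the *closing* transaction's weight, not the final
# commitment transaction's.
CLOSING_TX_WEIGHT = 672             # assumed: typical 2-output mutual close
ANCHOR_COMMITMENT_TX_WEIGHT = 1124  # assumed: anchor commitments are heavier

def closing_fee(feerate_perkw: int, weight: int) -> int:
    """Fee in satoshis for a tx of `weight` at `feerate_perkw` (sat per 1000 weight)."""
    return feerate_perkw * weight // 1000

feerate = 2500  # sat/kw, example value only
print(closing_fee(feerate, CLOSING_TX_WEIGHT))            # the correct basis
print(closing_fee(feerate, ANCHOR_COMMITMENT_TX_WEIGHT))  # the old, wrong basis
```

With anchors, using the commitment weight overestimates the closing fee substantially, which is why the bug mattered here.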
Force-pushed from 3e3efac to 2216824.
Force-pushed from 2216824 to daf2608.
I'm having some trouble testing this on regtest. Can you help me figure out how to configure c-lightning to use higher on-chain fee estimates? Is there a configuration I can set to hard-code feerates on regtest?

Note that it made me realize that I got something wrong in eclair: the node operator chooses a fee range when initiating shutdown, but if the remote peer rejects that fee range entirely (with a warning message + disconnect), eclair will just retry the same fee range on reconnection, and I didn't provide a hook for the node operator to send a different fee range.

This also highlights a difference in behavior between eclair and c-lightning where I'd like to have your feedback. In my opinion, it never makes sense when you're fundee to reject the funder's proposed fee range like c-lightning does: you're fundee, so you're not paying that fee.

WDYT?
Totally right that the upper fee should be infinite. But a low fee risks them blocking you by creating low-fee children, so we do need a lower bound. The spec should note this! I'll come up with some language.
In our test suite we pretend to be bitcoind and feed fake fees that way, but you're right, we should just include an option for regtest fees. I'll add a feature request.
OK, I've fixed this so we allow an infinite max if we're the non-funder. We still require the 100-block `estimatefee` value as a minimum, though; we could possibly drop it below that, but not to the relay minimum, I think.
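A minimal sketch of the bounds just described (the helper name and values are assumptions, not c-lightning's code): the non-funder doesn't pay the fee, so its acceptable maximum can be unbounded, but a floor is kept so the peer can't force a close so cheap it risks being pinned by low-fee children.

```python
# Sketch only: feerates in sat/kw; the floor comes from a 100-block estimate.
def acceptable_feerate_range(we_are_funder: bool,
                             estimate_100_blocks: int,
                             our_max: int) -> tuple:
    # Funders pay the fee, so they cap the max; non-funders accept anything
    # above the floor.
    upper = our_max if we_are_funder else float("inf")
    return (estimate_100_blocks, upper)

print(acceptable_feerate_range(we_are_funder=False,
                               estimate_100_blocks=253, our_max=5000))
```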
Force-pushed from daf2608 to 71cf497.
Thanks, I ran more tests with 71cf497 and here is what I found:
Note that when we're fundee and accept a fee from the funder's proposed fee range, I chose to answer with a fee range containing only that fee (in the example above, [138, 138]) since we both agree on it. Let me know if you think that could cause internal issues for c-lightning (the fundee could instead send back the same fee range it received from the funder; I'm not sure which makes more sense).
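A sketch of the reply behaviour described above (the message shape here is an assumption for illustration, not the real wire format): once the fundee picks a fee inside the funder's proposed range, it answers with the degenerate range [fee, fee].

```python
# Sketch only: fees in satoshis.
def fundee_closing_signed(funder_range: tuple, chosen_fee: int) -> dict:
    lo, hi = funder_range
    assert lo <= chosen_fee <= hi, "chosen fee must lie inside the funder's range"
    # Reply with a range containing only the agreed fee, signalling agreement.
    return {"fee_satoshis": chosen_fee,
            "fee_range": (chosen_fee, chosen_fee)}

print(fundee_closing_signed((100, 1000), 138))  # -> fee_range (138, 138), as above
```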
OK, I'll fix the range to match (as the spec suggests; I was lazy!). I'll check the c-lightning-as-funder case using lnprototest, which will catch this weirdness immediately. Thanks!
Force-pushed from a626cb9 to 574f2b2.
I have tested cross-compatibility between c-lightning 574f2b2 and eclair ACINQ/eclair@c37a2ca and everything looks good, thanks!
Thank you! Beers are owed. Given the number of beers I now owe you, I suspect we're going to have to catch up for several weeks somewhere! :)
I'm down for that, it sounds like a very good plan; beers are owed both ways for everything you've done as well ;)
Force-pushed from 396b13a to b40cbe0.
Force-pushed from b40cbe0 to 5e1b5b1.
…nt secret.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This check is going away anyway (only Electrum enforced it), but we know that all wumbo peers expect large HTLCs to work today.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Changelog-Added: Protocol: Allow sending large HTLCs if peer offers option_support_large_channel (> 4294967295msat)
This includes the new bolt11 test vectors, and also removes the requirement that HTLCs be less than 2^32 msat. We keep that check on receive for now, because Electrum enforced it: in two releases we will stop that too. So no longer warn about needing mpp in that case either.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Changelog-Deprecated: Protocol: No longer restrict HTLCs to be less than 2^32 msat.
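A minimal sketch of the relaxed check (the feature-flag name is taken from the commit message above; the function name is an assumption): HTLCs above the old 2^32 - 1 msat cap are allowed only if the peer offers option_support_large_channel.

```python
# Sketch only, assuming a boolean already derived from the peer's feature bits.
OLD_HTLC_CAP_MSAT = 4294967295  # 2**32 - 1

def htlc_size_ok(amount_msat: int, peer_offers_large_channel: bool) -> bool:
    # Small HTLCs are always fine; large ones need the wumbo feature.
    return amount_msat <= OLD_HTLC_CAP_MSAT or peer_offers_large_channel

assert htlc_size_ok(5_000_000_000, peer_offers_large_channel=True)
assert not htlc_size_ok(5_000_000_000, peer_offers_large_channel=False)
```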
…_tx.
This touches a lot of text, mainly to change "if `option_anchor_outputs`" to "if `option_anchors`".
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
It also gets rid of the requirement that the mutual-close fee be at most the fee of the old commitment transaction. We still honor that, however, to avoid surprising old peers.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Force-pushed from 663f4ab to 0b6b40b.
Based on a commit by @niftynei, but:
- Separated quickclose logic from the main loop.
- Made it independent of anchor_outputs; use an option instead.
- Disable if they've specified how to negotiate.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
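A sketch of the option logic this commit message describes (all names below are assumptions): quick-close defaults on for anchor-output channels, follows an explicit `--experimental-quick-close` setting otherwise, and is disabled whenever the user has specified their own negotiation behaviour.

```python
from typing import Optional

def quickclose_enabled(opt_quick_close: Optional[bool],
                       anchor_outputs_channel: bool,
                       user_specified_negotiation: bool) -> bool:
    # An explicit negotiation preference (e.g. a user-set fee range) wins.
    if user_specified_negotiation:
        return False
    # Next, honour an explicit --experimental-quick-close on/off setting.
    if opt_quick_close is not None:
        return opt_quick_close
    # Default: on for anchor-output channels, off otherwise.
    return anchor_outputs_channel
```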
This follows lightning/bolts#847. For anchor_outputs, we pass down a max_feerate to closingd, and set the fee ceiling to MAX. It uses that to estimate the desired closing fee.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Changelog-EXPERIMENTAL: Anchor output mutual close allows a fee higher than the final commitment transaction (as per lightning-rfc #847)
This is now allowed for anchors (as per lightning/bolts#847). We need to play with feerates, since we don't put a discount on anchor commitments yet.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This affects the range we offer even without quick-close, but it's more critical for quick-close.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Changelog-Added: JSONRPC: `close` now takes a `feerange` parameter to set min/max fee rates for mutual close.
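A hedged usage sketch of the new `feerange` parameter via pyln-client: the generic `call` method is used so nothing is assumed about the typed wrapper's signature, and the socket path, peer id, and feerate strings are placeholders.

```python
from pyln.client import LightningRpc

rpc = LightningRpc("/path/to/lightning-rpc")  # placeholder socket path
rpc.call("close", {
    "id": "02aabb...ccdd",                    # placeholder peer/channel id
    "feerange": ["253perkw", "5000perkw"],    # assumed min/max close feerates
})
```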
That was quick! We remove the 50% test, since the default is now to use quickclose.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Changelog-Added: Protocol: We now perform quick-close if the peer supports it.
Force-pushed from 0b6b40b to 8a9e7e0.
I need to create a fake account to Ack my PRs ;) Ack 8a9e7e0
shutdown_subdaemons frees the channel and calls destroy_close_command_on_channel_destroy, see gdb:

```
#0  destroy_close_command_on_channel_destroy (_=0x55db6ca38e18, cc=0x55db6ca43338) at lightningd/closing_control.c:94
#1  0x000055db6a8181b5 in notify (ctx=0x55db6ca38df0, type=TAL_NOTIFY_FREE, info=0x55db6ca38e18, saved_errno=0) at ccan/ccan/tal/tal.c:237
#2  0x000055db6a8186bb in del_tree (t=0x55db6ca38df0, orig=0x55db6ca38e18, saved_errno=0) at ccan/ccan/tal/tal.c:402
#3  0x000055db6a818a47 in tal_free (ctx=0x55db6ca38e18) at ccan/ccan/tal/tal.c:486
#4  0x000055db6a73fffa in shutdown_subdaemons (ld=0x55db6c8b4ca8) at lightningd/lightningd.c:543
#5  0x000055db6a741098 in main (argc=21, argv=0x7ffffa3e8048) at lightningd/lightningd.c:1192
```

Before this PR, there was no io_loop after shutdown_subdaemons and the client side raised a general `Connection to RPC server lost.` Now we test the more specific `Channel forgotten before proper close.`, which is good! BTW, this test was added recently in PR #4599.
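A rough Python analogue (using `weakref.finalize`, not the C code) of what the backtrace shows: tal_free() notifies registered destructors, so freeing the channel runs destroy_close_command_on_channel_destroy, which fails the pending close command with the specific error.

```python
import weakref

class Channel:
    """Stand-in for the tal-allocated channel object."""

def fail_close_command(msg: str) -> None:
    print(f"close command failed: {msg}")

chan = Channel()
# Register a callback to run when the object is destroyed, like a tal destructor.
weakref.finalize(chan, fail_close_command,
                 "Channel forgotten before proper close.")
del chan  # analogue of tal_free(channel): the registered callback fires
```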
Implementation of lightning/bolts#847 (Update: now merged!).
Depends on #4755.