Backfill gossip without buffering directly in LDK #1660
Conversation
Looks good, basically :)
Force-pushed from 4aaa673 to 92b767b.
Looks good! I'm happy with this after we get a second reviewer. Feel free to squash.
This consolidates our various checks on peer buffer space into the `Peer` impl itself, making the thresholds at which we stop taking various actions on a peer more readable as a whole. This commit was primarily authored by `Valentine Wallace <vwallace@protonmail.com>` with some amendments by `Matt Corallo <git@bluematt.me>`.
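For illustration only, here is a minimal sketch of what consolidating the buffer-space checks into a `Peer` impl could look like. The struct field, method names, and threshold constants below are hypothetical stand-ins, not LDK's actual API:

```rust
use std::collections::VecDeque;

// Hypothetical peer with a queue of serialized outbound messages.
struct Peer {
    pending_outbound_buffer: VecDeque<Vec<u8>>,
}

// Illustrative thresholds; the real constants and values live in the peer handler.
const OUTBOUND_BUFFER_LIMIT_READ_PAUSE: usize = 10;
const OUTBOUND_BUFFER_LIMIT_DROP_GOSSIP: usize = 20;

impl Peer {
    /// Only fetch more gossip backfill once everything previously queued has
    /// been handed off to the socket.
    fn should_buffer_gossip_backfill(&self) -> bool {
        self.pending_outbound_buffer.is_empty()
    }

    /// Pause reading from this peer once its outbound queue grows too large.
    fn should_pause_read(&self) -> bool {
        self.pending_outbound_buffer.len() > OUTBOUND_BUFFER_LIMIT_READ_PAUSE
    }

    /// Drop new gossip broadcasts to this peer rather than buffering further.
    fn should_drop_gossip_broadcast(&self) -> bool {
        self.pending_outbound_buffer.len() > OUTBOUND_BUFFER_LIMIT_DROP_GOSSIP
    }
}
```

Keeping the thresholds behind named methods on `Peer` means the read-pause, gossip-drop, and backfill decisions can be compared at a glance instead of being scattered across the write path.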
Force-pushed from 92b767b to 1bd0303.
Squashed without diff.
ACK after squash
Instead of backfilling gossip by buffering (up to) ten messages at a time, only buffer one message at a time, as the peers' outbound socket buffer drains. This moves the outbound backfill messages out of `PeerHandler` and into the operating system buffer, where it arguably belongs. Not buffering causes us to walk the gossip B-Trees somewhat more often, but avoids allocating vecs for the responses. While it's probably (without having benchmarked it) a net performance loss, it simplifies buffer tracking and leaves us with more room to play with the buffer sizing constants as we add onion message forwarding, which is an important win. Note that because we change how often we check if we're out of messages to send before pinging, we slightly change how many messages are exchanged at once, impacting the `test_do_attempt_write_data` constants.
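To make the difference concrete, here is a self-contained toy model of the drain-driven approach; the types and names are invented for illustration and are not LDK's internals. Each time the socket buffer empties, at most one further backfill message is pulled from the gossip source:

```rust
use std::collections::VecDeque;

// Toy gossip source standing in for iteration over the network graph.
struct ToyGossipSource {
    messages: Vec<String>,
    next: usize,
}

impl ToyGossipSource {
    fn next_message(&mut self) -> Option<String> {
        let msg = self.messages.get(self.next).cloned();
        if msg.is_some() {
            self.next += 1;
        }
        msg
    }
}

// Toy peer whose outbound queue models the in-process buffer in front of the socket.
struct ToyPeer {
    outbound: VecDeque<String>,
}

// Called whenever the peer's socket buffer drains: queue at most one more
// backfill message, letting the operating system buffer do the batching.
fn on_socket_buffer_drained(peer: &mut ToyPeer, source: &mut ToyGossipSource) {
    if peer.outbound.is_empty() {
        if let Some(msg) = source.next_message() {
            peer.outbound.push_back(msg);
        }
    }
}

fn main() {
    let mut peer = ToyPeer { outbound: VecDeque::new() };
    let mut source = ToyGossipSource {
        messages: vec!["node_announcement_1".into(), "node_announcement_2".into()],
        next: 0,
    };
    on_socket_buffer_drained(&mut peer, &mut source);
    // Only one message is buffered in-process at a time, unlike the old
    // approach of pre-buffering up to ten.
    assert_eq!(peer.outbound.len(), 1);
}
```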
Force-pushed from 0d2504d to 86dcb5b.
Squashed without further changes.
Ah, I think a warning may have been introduced in
Yeah, that's from addressing one of my comments. TIL you can avoid the explicit

diff --git a/lightning/src/routing/gossip.rs b/lightning/src/routing/gossip.rs
index cb49ab81..e91d251f 100644
--- a/lightning/src/routing/gossip.rs
+++ b/lightning/src/routing/gossip.rs
@@ -38,6 +38,7 @@ use io_extras::{copy, sink};
use prelude::*;
use alloc::collections::{BTreeMap, btree_map::Entry as BtreeEntry};
use core::{cmp, fmt};
+use core::ops::Bound;
use sync::{RwLock, RwLockReadGuard};
use core::sync::atomic::{AtomicUsize, Ordering};
use sync::Mutex;
@@ -342,14 +343,9 @@ where C::Target: chain::Access, L::Target: Logger
fn get_next_node_announcement(&self, starting_point: Option<&PublicKey>) -> Option<NodeAnnouncement> {
let nodes = self.network_graph.nodes.read().unwrap();
- let mut iter = if let Some(pubkey) = starting_point {
- let mut iter = nodes.range(NodeId::from_pubkey(pubkey)..);
- iter.next();
- iter
- } else {
- nodes.range::<NodeId, _>(..)
- };
- for (_, ref node) in iter {
+ let starting_bound = starting_point.map_or(
+ Bound::Unbounded, |pubkey| Bound::Excluded(NodeId::from_pubkey(pubkey)));
+ for (_, ref node) in nodes.range((starting_bound, Bound::Unbounded)) {
if let Some(node_info) = node.announcement_info.as_ref() {
if let Some(msg) = node_info.announcement_message.clone() {
return Some(msg);
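As a standalone illustration of the pattern in the diff above, using only the standard library and made-up keys: `BTreeMap::range` accepts explicit `Bound`s, so the starting key can be excluded directly instead of being skipped with a manual `iter.next()`.

```rust
use std::collections::BTreeMap;
use std::ops::Bound;

fn main() {
    let mut nodes = BTreeMap::new();
    nodes.insert(1u32, "alice");
    nodes.insert(2u32, "bob");
    nodes.insert(3u32, "carol");

    // Resume iteration strictly after key 1, analogous to resuming a gossip
    // backfill strictly after the last node announcement already sent.
    let starting_point = Some(1u32);
    let starting_bound = starting_point.map_or(Bound::Unbounded, Bound::Excluded);

    let resumed: Vec<_> = nodes.range((starting_bound, Bound::Unbounded)).collect();
    assert_eq!(resumed, vec![(&2, &"bob"), (&3, &"carol")]);
}
```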
Force-pushed from 86dcb5b to 7970f3d.
Force-pushed from 7970f3d to 8ffaacb.
Squashed without further changes.
It's more accurate to name it as dropping gossip broadcasts, as it won't drop all gossip. Also fix an accidentally flipped bool introduced in lightningdevkit#1660.
Fixes a flipped bool that was introduced in lightningdevkit#1660
This is intended as a pre-req for #1604 with a few of the changes from #1604 pulled in. The commit description of the "main" commit follows, with a few other minor tweaks present as well.
Instead of backfilling gossip by buffering (up to) ten messages at a time, only buffer one message at a time, as the peers' outbound socket buffer drains. This moves the outbound backfill messages out of `PeerHandler` and into the operating system buffer, where it arguably belongs. Not buffering causes us to walk the gossip B-Trees somewhat more often, but avoids allocating vecs for the responses. While it's probably (without having benchmarked it) a net performance loss, it simplifies buffer tracking and leaves us with more room to play with the buffer sizing constants as we add onion message forwarding, which is an important win.