
bolt07: Adding P2P routing gossiping #1

Closed
wants to merge 3 commits

Conversation

@cdecker (Collaborator) commented Nov 14, 2016:

This is my first draft of the gossiping protocol. It implements a
staggered broadcast, and forces nodes to have a channel open in order to
get announcements forwarded.

All announcements are signed by the announcing node, the messages are
smaller than FLARE's, they include metadata for nodes, and the signed
state of the announcing node can be reconstructed from the node table
and channel table that we need anyway. The downside is that we
disseminate the network topology to every participant in the network,
but we should be able to get up to a few tens of thousands of nodes
before this becomes an issue.

We need to merge this with FLARE light: https://docs.google.com/document/d/1sgghubGy2X23CivR8ndUjDnGvCYczcPwbq8amrtyhDo/edit

[3:rgb_color]
[var:alias]
[4+X*46:channels]
[64:signature]
Collaborator:

If we only use a single signature generated using a node's identity public key, then it is possible for nodes to advertise non-canonical versions of the network topology by generating several IDs and re-advertising the channel with each.

In order to eliminate this possibility (if we deem it undesirable), we can add a single signature to each of the advertised channels. This signature would be generated by adding the multi-sig channel key and the node's identity key to create a channel authentication key. A signature over the channel description would then be generated using this key.

Nodes can verify this signature by performing public key recovery on the signature in order to yield the channel authentication key, then verify that the result of adding the multi-sig key for the channel and a node's identity key yields the same point. After this recovery process, signature validation continues as normal, with nodes rejecting the channel if the output referenced on the blockchain is spent, or the sig is invalid.
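
To make the recovery check concrete, here is a minimal sketch of the key relation being verified, with hand-rolled secp256k1 arithmetic and toy private keys (all names and values are illustrative, not a concrete proposal):

```python
# secp256k1 parameters
P = 2**256 - 2**32 - 977   # field prime
N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141
G = (0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798,
     0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8)

def point_add(a, b):
    if a is None: return b
    if b is None: return a
    (x1, y1), (x2, y2) = a, b
    if x1 == x2 and (y1 + y2) % P == 0:
        return None                                 # point at infinity
    if a == b:
        lam = 3 * x1 * x1 * pow(2 * y1, -1, P) % P  # doubling (a = 0 curve)
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, P) % P
    x3 = (lam * lam - x1 - x2) % P
    return (x3, (lam * (x1 - x3) - y1) % P)

def scalar_mul(k, point):
    result, addend = None, point
    while k:
        if k & 1:
            result = point_add(result, addend)
        addend = point_add(addend, addend)
        k >>= 1
    return result

# Signer: the channel authentication key is the sum of the two private keys.
multisig_priv, identity_priv = 0x1111, 0x2222       # toy values
auth_priv = (multisig_priv + identity_priv) % N
auth_pub = scalar_mul(auth_priv, G)                 # key that pubkey recovery yields

# Verifier: adding the advertised multi-sig pubkey and identity pubkey must
# give the same point recovered from the signature; then normal validation
# (signature check, unspent output) continues as described above.
multisig_pub = scalar_mul(multisig_priv, G)
identity_pub = scalar_mul(identity_priv, G)
assert point_add(multisig_pub, identity_pub) == auth_pub
```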

Collaborator Author:

I think at least in lightningd we currently always use the node's public identity key as the output key in the anchor transaction; hence, knowing the two keys and the format of the anchor transaction output script, a node can go and check that one of the scripts in the hinted transaction has the expected format, i.e., matches the P2SH hash we'd expect.

This is clearly sub-optimal, since we'd like to be able to switch the keys in the script, destroying that link. I like your construction, but having to attach a signature for each channel would easily double the number of bytes to transfer/store. Can we somehow aggregate the signatures?

Collaborator:

I see, in lnd we currently generate a fresh key for the multi-sig output in each channel. Within the codebase the identity keys and multi-sig keys are completely separated with the rationale that in the future the multi-sig keys may be completely offline.

Yeah, attaching a signature for each channel would indeed seriously blow up the size of the channel advertisements. However, since the advertising node controls the private key(s) used within all their funding outputs, and also their identity key, we can easily aggregate the signatures, since nodes advertise info about their channel direction unilaterally.

In order to compress the advertisements, a node can add together the private keys of all multi-sig keys and their identity key (reduced modulo N, where N is the order of the curve), then use that to generate a single signature over all the advertisements. To validate, nodes would then similarly perform pubkey recovery, add together all the pubkeys contained in the advertisement, ensure that the sum of those keys is what has been recovered, then verify the signature, and finally ensure the outputs are still unspent. This allows us to maintain the size of the advertisements as you've currently drafted them.

I think this can also be adapted to be used for incremental updates, meaning an update that includes new channels that have been added.
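
A minimal sketch of this aggregation, reusing `point_add`, `scalar_mul`, `G`, and `N` from the sketch in the comment above (toy values, illustrative only):

```python
from functools import reduce

multisig_privs = [0x1111, 0x2345, 0x3abc]   # one multi-sig key per advertised channel
identity_priv = 0x4444                      # the node's identity key

# Signer: sum all private keys, reduced modulo the curve order N.
agg_priv = (sum(multisig_privs) + identity_priv) % N
agg_pub = scalar_mul(agg_priv, G)           # the point pubkey recovery must yield

# Verifier: sum the pubkeys contained in the advertisement and compare against
# the recovered key, then continue with normal signature validation and the
# unspent-output checks.
pubkeys = [scalar_mul(k, G) for k in multisig_privs] + [scalar_mul(identity_priv, G)]
assert reduce(point_add, pubkeys) == agg_pub
```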

Collaborator Author:

Well in that case we also need to include the per-channel pubkey for both endpoints, in order to link the channel announcement to the anchor TX, which means an additional 66 bytes per channel announcement, since the channel keys are completely detached from the ID key.

But the aggregation scheme is nice, let's keep that in mind (and let's mix in the ID key as well), though I'm not sure whether this'd allow a forwarding node to mix in old announcements with new ones if we go with the incremental version.

If an announcement is not valid, it MUST be discarded; otherwise the node applies it to its local view of the topology: the receiving node removes all channels from its local view that match the `node_id` as the origin of the channel, i.e., all channels that have been previously announced by that node, and adds all channels in the announcement unless they have an `expiry` field of `0xFF`.

If, after applying the changes from the announcement, there are no channels associated with the announcing node, then the receiving node MAY purge the announcing node from the set of known nodes. Otherwise the receiving node updates the metadata and stores the signature associated with the announcement. This will later allow the receiving node to rebuild the announcement for its peers.

After processing the announcement, the receiving node adds it to a list of outgoing announcements. The list of outgoing announcements MUST NOT contain multiple announcements with the same `node_id`: duplicates MUST be removed and announcements with lower `timestamp` fields MUST be replaced. This list of outgoing announcements is flushed once every 60 seconds, independently of the arrival times of announcements, resulting in a staggered announcement and deduplication of announcements.

Nodes MAY re-announce their channels regularly; however, this is discouraged in order to keep the resource requirements low. In order to bootstrap nodes that were not online at the time of the broadcast, nodes will announce all known nodes and their associated channels at the time of connection establishment. The individual announcements can be reconstructed from the set of known nodes, containing the metadata and signatures for the announcements, and the routing table, containing the channel information. The broadcast is stopped after the first hop since the peers of the newly joined node already have the announcement and the timestamp check will fail.
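
A minimal sketch of the outgoing-announcement list described above, assuming announcements are dicts carrying `node_id` and `timestamp`, and with `send_to_peers` standing in for the actual wire layer (names are illustrative):

```python
import threading

class OutgoingAnnouncements:
    """One pending announcement per node_id, flushed on a fixed timer."""

    def __init__(self, send_to_peers, interval=60.0):
        self.pending = {}                 # node_id -> announcement
        self.send_to_peers = send_to_peers
        self.interval = interval
        self.lock = threading.Lock()

    def enqueue(self, ann):
        # Deduplicate: keep only the announcement with the highest timestamp.
        with self.lock:
            held = self.pending.get(ann['node_id'])
            if held is None or held['timestamp'] < ann['timestamp']:
                self.pending[ann['node_id']] = ann

    def flush(self):
        # Runs every `interval` seconds, independent of arrival times.
        with self.lock:
            batch, self.pending = list(self.pending.values()), {}
        for ann in batch:
            self.send_to_peers(ann)
        threading.Timer(self.interval, self.flush).start()
```

Calling `flush()` once arms the recurring timer; announcements superseded within a 60-second window never leave the buffer, which is what deduplicates rapid updates along the propagation path.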
Collaborator:

Announcing the full routing table upon each reconnection may become a bit large (hundreds of megs) as the network grows.

Instead, what if nodes collected the advertised channel state into a merkle tree? With this, upon reconnection nodes can exchange root hashes and then decide whether they need to re-send the state or not. If the hashes differ, then the nodes can bisect the routing state tree in order to discover where their state diverged, sending the sub-tree(s) necessary for full synchronization.

One issue with the proposal above is that it requires nodes to maintain the exact same order w.r.t. the routing state so they can reconstruct an identical graph topology root. To remedy this, we can use a merkle patricia tree instead. Such a data structure has a desirable trait in that the structure of the final tree doesn't depend on insertion order. Therefore, nodes can process updates in an arbitrary order, yet still construct the same tree root in the end.

Collaborator Author:

Very nice, I like the patricia tree idea. That was exactly my worry with a merkle tree: missing a node somewhere in the middle would invalidate all following entries as well.

I'd still keep a node and its channels together as the leaves of the tree, simply because having a single signature cover the latest state forces some consistency, and we can reuse the hash from the signature as the leaf entry.

Collaborator:

Ahh yeah, the node identity itself can be the key to the tree! This also localizes changes to a single leaf rather than possibly multiple sub-trees. We can re-use the scheme we discussed in the comment above to further authenticate changes to the tree.

Collaborator:

Bisecting is horrible for latency. Fortunately, the rsync algorithm solves this pretty well, and is easy to implement. You just need a canonical ordering of the data, which is trivial to come up with, and then you can do this in 1RTT.

Peers in the network exchange `route_announcement` messages that contain information about a node and about its outgoing channels. The node creating an announcement is referred to as the _announcing node_. Announcements MUST have at least one associated channel whose existence can be proven by inspection of the blockchain, i.e., the anchor transaction creating the channel MUST have been confirmed.

Nodes receiving an announcement verify that this is the latest update from the announcing node by comparing the included timestamp and update their local view of the network's topology accordingly. The receiving node removes all channels that are signaled to be removed or are no longer included in the announcement, adds any new channels and adjusts the parameters of the existing channels. Notice that this specification does not include any mechanism for differential announcements, i.e., every announcement ships the entire final state for that node.

Once the announcement has been processed it is added to a list of outgoing announcements to the processing node's peers, which will be flushed at regular intervals. This store and delayed forward broadcast is called a _staggered broadcast_.

Notice that each announcement will only announce a single direction of the channel, i.e., the outgoing direction from the point of view of the announcing node. The other direction will be announced by the other endpoint of the channel. This reflects the willingness of the announcing node to forward payments over the announced channel.
Collaborator:

One downside of this approach compared to "both must advertise" is that we now have a good bit of redundant data flying around the network, as each channel is essentially advertised twice: once by each node.

Collaborator Author:

Yeah, that is a bit unfortunate, however it also brings some advantages:

  • we require no coordination by the endpoints to advertise a channel in one direction
  • we sidestep the whole business of having to define a canonical direction ordering and having to associate parameters with one direction or the other; e.g., if the fees from A to B are X while from B to A the fees are Y, it is always clear which direction we are talking about
  • channels can be removed uncooperatively, e.g., one node dies and I still get forward requests for that channel hours later, and it allows for uncooperative parameter changes, e.g., the other endpoint does not sign an announcement in which we increase our fees

I think it's a tradeoff between bytes in flight (storage at nodes is identical without OWAS) and the simplicity and advantages I mentioned.

Collaborator:

There's a natural asymmetry here, as you point out. Ideally both sides should have to advertise creation, and then either can shut down. It's a little more complex, though:

  1. Channel announce. This contains both node ids, and the channel. Both node ids sign; you don't need to know anything about the node to validate and forward these. Conflicting (i.e., different other than sigs) channel announces also get forwarded, and the channel and nodes blacklisted.
  2. Node announce. Only valid after a channel announce has identified the node.
  3. Channel updates. These are one-sided. Only valid after the channel announce.

On startup, you dump all the channel announce, all the node announces, and the latest channel update for each.
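
A minimal sketch of the validation dependencies in this three-message proposal (hypothetical names; signature and blockchain checks elided):

```python
# State built up from validated channel announces.
known_channels: set = set()
known_nodes: set = set()

def on_channel_announce(chan_id, node_a, node_b, sigs_valid: bool) -> bool:
    # Both node ids sign; nothing about the nodes needs to be known beforehand.
    if not sigs_valid:
        return False
    known_channels.add(chan_id)
    known_nodes.update((node_a, node_b))
    return True

def on_node_announce(node_id) -> bool:
    # Only valid after a channel announce has identified the node.
    return node_id in known_nodes

def on_channel_update(chan_id) -> bool:
    # One-sided; only valid after the corresponding channel announce.
    return chan_id in known_channels
```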

@cdecker (Collaborator Author) commented Nov 15, 2016:

Oops, it seems I uploaded the pandoc'd version of the markdown file. I'll commit the markdown source and squash upon merge. Rebase would kill inline comments...


The announcing node creates the message with the node's information and all its channels.
Normal removal of a channel is done by omitting the channel in the `channels` field.
Notice that this does not allow removing a channel if no active channels are left open, since an announcement requires at least one channel in the `channels` field to be valid.
@pm47 (Collaborator) Nov 15, 2016:

Why not just allow messages with no channels?

Collaborator Author:

The problem is not so much accepting messages without channels; the problem is that we'd have to forward them as well, flooding the network with messages that are basically free to create.

An attacker can just generate any number of node IDs and announce their existence. And we can't just drop empty announces for which we don't know the corresponding node, because other nodes may know it and require the removal message.

If we require a channel, even a closed one, the attacker must at least make a bitcoin transaction, which makes the attack non-free. I agree that signaling via the expiry is a bit wonky, but keep in mind that this is voluntary signaling; we can still remove channels if the anchor TX gets spent (modulo the effort of tracking those).


Peers in the network exchange `route_announcement` messages that contain information about a node and about its outgoing channels.
The node creating an announcement is referred to as the _announcing node_.
Announcements MUST have at least one associated channel whose existence can be proven by inspection of the blockchain, i.e., the anchor transaction creating the channel MUST have been confirmed.
Collaborator:

You mean that the transaction should have reached `min_conf`, right?

Collaborator Author:

Well, we might just set this at 6 confirmations, which should be safe. Keep in mind though that the endpoints of the channel may have completely different policies, some accepting 0-conf anchors for channels and some being pedantic and waiting for 120 confirmations, for example. It should be safe to just set a lower bound enforced by nodes.

@pm47 (Collaborator) Nov 15, 2016:

6 confirmations seems fine. My point was that if we just wait for one confirmation (which is what the spec seems to be saying) we might have issues with some nodes not having received the new block yet and discarding the announcement.

Collaborator Author:

👍 I'll add the 6 confirmations to the spec.

Collaborator:

Btw I just realized that 0-conf anchors are not possible with ids based on block height 😁

The list of outgoing announcements MUST NOT contain multiple announcements with the same `node_id`: duplicates MUST be removed and announcements with lower `timestamp` fields MUST be replaced.
This list of outgoing announcements is flushed once every 60 seconds, independently of the arrival times of announcements, resulting in a staggered announcement and deduplication of announcements.

Nodes MAY re-announce their channels regularly; however, this is discouraged in order to keep the resource requirements low.
Collaborator:

As a basic dos protection, shouldn't we just not propagate re-announcements?

Collaborator Author:

Good point. I had a heartbeat in mind, allowing nodes to purge channels and nodes that they haven't seen for a long time (24h?), but it might cause a lot of traffic.

We require timestamps to increase, so that re-announcements actually have to start from the announcing node. Dropping identical re-announcements is not hard; on the other hand, it is trivial for the announcing node to slightly modify its announcement, making it count as new.

Collaborator:

Right, we will probably need some kind of rate limiting logic and blacklisting then, à la bitcoind.

@Roasbeef (Collaborator) commented Nov 17, 2016:

As the prior comment chain was getting a bit buried as the commits were updated, I'd like to discuss an approach for efficiently synchronizing the routing table state between two nodes, recapping our discussions earlier for posterity.

Earlier up in the chain Christian and I were discussing possibly introducing a merkle patricia tree over the entire/known routing state. Upon connect, nodes would then use this tree to compare root hashes. If the root hashes differ, then they would engage in a bisection protocol in order to find the point where their state diverged, sending the appropriate sub-tree(s) in conclusion. This approach is nice as with a single RTT both sides can determine if their state is synchronized, and since the structure of merkle patricia trees isn't dependent on the order of insertion, nodes don't need to agree on a canonical leaf sorting order as they would for merkle trees.

However, a major downside that Rusty pointed out is that the above protocol can lead to pretty poor latency to reach a synced state, as it requires many round trips back and forth to reconcile the difference between the routing table states. It still may be worth looking into an authenticated data structure over the entire/known channel state, as that might allow us to do some interesting things w.r.t. fragmentation/sharding of the state and associated updates (and I think it can be combined with what I'm about to suggest). As a counter to eliminate the excessive latency, Rusty proposed that we use the rsync algorithm to find the difference in a single round trip. I think we can take this a step further without getting too fancy.

Y'all remember IBLTs? I think they're perfectly applicable to the problem we're attempting to solve, namely: efficient set reconciliation. In our context, the set itself is the routing table. The reconciliation can be done in a single round trip if Alice (the initiator) doesn't have any entries that Bob isn't aware of, and in 1.5 RTTs if Alice has entries that aren't present in Bob's routing table. The approach outlined in this paper (IBLTs plus a heuristic to estimate the size of the set difference) seems ripe for integrating into our system.

In the past Rusty did a bit of research into searching for optimal parameters applicable to the previous use-case: synchronizing the mempool between Bitcoin full-nodes. I think much of that research can be re-used as we figure out the correct parameterization for our use-case.

This can possibly be combined with the root state hash idea, to first compare the hashes of the root routing state, falling back to set reconciliation if the hashes don't match. The tree would be keyed by node pubkey (with the value being a commitment to the node's channel state, possibly also an authenticated tree), while the items inserted into the IBLT would be some digest of the channel authentication information. So...why not both?
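
To make the set-reconciliation step concrete, here is a toy IBLT sketch (the parameters, cell layout, and fixed 8-byte routing-entry digests are all illustrative, not a concrete proposal): Alice sends her table's IBLT, Bob subtracts his own copy and peels out the symmetric difference.

```python
import hashlib

K, M = 3, 60                       # hash functions per item, cells (M divisible by K)

def cell_indices(item: bytes):
    # One cell per partition of the table, so an item never hits a cell twice.
    r = M // K
    return [i * r + int.from_bytes(hashlib.sha256(bytes([i]) + item).digest()[:8],
                                   'big') % r
            for i in range(K)]

def checksum(item: bytes) -> int:
    return int.from_bytes(hashlib.sha256(b'chk' + item).digest()[:8], 'big')

class IBLT:
    def __init__(self):
        self.count = [0] * M
        self.keysum = [0] * M      # XOR of (fixed 8-byte) items
        self.chksum = [0] * M      # XOR of item checksums

    def insert(self, item: bytes, sign: int = 1):
        v = int.from_bytes(item, 'big')
        for i in cell_indices(item):
            self.count[i] += sign
            self.keysum[i] ^= v
            self.chksum[i] ^= checksum(item)

    def subtract(self, other):
        diff = IBLT()
        for i in range(M):
            diff.count[i] = self.count[i] - other.count[i]
            diff.keysum[i] = self.keysum[i] ^ other.keysum[i]
            diff.chksum[i] = self.chksum[i] ^ other.chksum[i]
        return diff

    def peel(self):
        # Returns (only_in_self, only_in_other); can fail if the difference is
        # too large for M cells -- then fall back to a fuller dump.
        mine, theirs = set(), set()
        progress = True
        while progress:
            progress = False
            for i in range(M):
                if self.count[i] in (1, -1):
                    item = self.keysum[i].to_bytes(8, 'big')
                    if checksum(item) != self.chksum[i]:
                        continue   # not a pure cell
                    (mine if self.count[i] == 1 else theirs).add(item)
                    self.insert(item, sign=-self.count[i])
                    progress = True
        return mine, theirs

# Alice and Bob each insert a digest of every routing entry they know:
alice, bob = IBLT(), IBLT()
for e in [b'chan0001', b'chan0002', b'chan0003']: alice.insert(e)
for e in [b'chan0002', b'chan0003', b'chan0004']: bob.insert(e)
# Peels to ({b'chan0001'}, {b'chan0004'}) with overwhelming probability.
print(alice.subtract(bob).peel())
```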

Otherwise the short `channel-id` may change in a reorg, and
double-spending would allow multiple announcements without added
cost. It also slightly favours resilient channels for transfer
forwarding. Thanks @pm47 for pointing this out.
@BitfuryLightning (Contributor):

@cdecker We have some comments on the routing PR:

  1. If a node has a lot of channels then it probably closes/opens them frequently. That process will need significant bandwidth, since the message contains the full RT. Sending a diff is preferred in this case, because it will decrease bandwidth usage significantly.
  2. Blockheight + transaction height is not unique in the case of forks, and using these as an id is not as stable as using the transaction's hash.

So our propositions are:

  1. Send messages about only newly created channels. There is a proposal in Bitfury's routing schema with ideas on how to achieve that; you can use them. If you come up with any specs on how to implement incremental changes, that would be very nice.
  2. Do not send messages about closed channels, because this can be determined from the blockchain.
  3. Nodes retransmit a message/part if it contained information not known to the node before.
  4. Use the transaction hash instead of the pair blockheight + transaction height to make it more robust against blockchain forks.

What do you think?

@rustyrussell (Collaborator):

@Roasbeef Good thinking, please open a wishlist PR to track it. For the moment, let's just blast on opening.

@BitfuryLightning Yes, see my proposal, which was buried above, and contains three separate messages:

  1. Channel announce. This contains both node ids, and the channel. Both node ids sign; you don't need to know anything about the node to validate and forward these. Conflicting (i.e., different other than sigs) channel announces also get forwarded, and the channel and nodes blacklisted.
  2. Node announce. Only valid after a channel announce has identified the node.
  3. Channel updates. These are one-sided. Only valid after the channel announce.

Happy with txid in channel announce. Happy to ignore closing for now, and assume everyone has UTXO set. Agreed with rexmit of previously unknown messages as simplest solution.

For the moment, just send entire state on first connection, otherwise changes since last connection. That will scale fine for the first few thousand nodes, while @Roasbeef specs up IBLT...

@cdecker (Collaborator Author) commented Nov 20, 2016:

Good points by @BitfuryLightning, let me try to address them as I originally thought about them.

In the serialized format the channels are very light, i.e., 46 bytes each, so a single TCP/UDP packet can transport about 30 channels with the minimal MTU of 1500. Upon updates we do not forward the entire routing table, but we do forward all channels of a single node, so the update size is proportional to the number of channels of the updating node, not to the size of the network.
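
For the record, a back-of-the-envelope check of the ~30 figure (assuming IPv4 and TCP headers without options):

```python
mtu, ip_hdr, tcp_hdr, per_channel = 1500, 20, 20, 46
payload = mtu - ip_hdr - tcp_hdr      # 1460 bytes of payload per packet
print(payload // per_channel)         # -> 31 channels per packet
```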

Regarding the frequency: I agree that large hubs with many connections will have many updates, which can't be avoided, and this full dump of the updating node's state is wasteful. However, the staggering serves to aggregate these updates along the propagation path, by replacing superseded updates in the send buffer of forwarding nodes. The staggering delays the announcement by 1 minute * network diameter, but we also get rid of DoS attacks and the flapping node states that would result from too-frequent updates.

Finally, it is relatively easy for nodes to be out of sync with most differential synchronization mechanisms, which would then either revert to a full dump or a more involved resynchronization mechanism, taking a number of round trips to resolve. I simply feel that sending the node's info as a whole is the most robust way of dealing with this currently. If we eventually upgrade to a sophisticated differential synchronization mechanism, we shouldn't bother optimizing this first iteration at all; after all, the nodes will be small for now, and we want to discourage large hubs anyway :-)

Re passive removal: I know I came up with it, but on further thought it's very ugly, sorry about that. It actually requires nodes to actively monitor the blockchain for spends of any anchor transaction they've seen so far, and it makes voluntary removals that do not result in the channel being torn down impossible. Take for example a user that goes offline at night: they'll probably want to keep the channel established, and their peer is happy with that, but without voluntary removals they'd either still get forwarding requests or they'd have to close the channel.

Re short-ID: the channel needs to be established and considered open by both endpoints for it to be announced, which already requires a number of confirmations, and following @pm47's suggestion I added a min_conf of 6 for channels to be forwarded at all. I feel that the short-ID is relatively stable, and a reorg would force receiving nodes to look up the TX in the blockchain anyway, so a changing short-ID is not that bad. It's a nice little trick to reduce the size of the updates once we go for the differential schemes, and it cuts down on redundant information. But I'm also happy to drop it if a majority feel it introduces too much complexity :-)

The much more pressing matter, I think, is the signed unit of information that we try to synchronize on the nodes, and not so much how we do the synchronization. We want to have the information authenticated by the origin, i.e., the announcing node, so we probably want to have that state signed by it. This means we have to come up with a format for the information that we agree on for the foreseeable future, since any synchronization mechanism will have to be able to rebuild that format from its local information in order to verify the signature. We probably want the signature to commit to the state of the node as well as its channels.

So we can either sign the entire state with a single signature, or we can sign each incremental change, or we can sign the node info and individual channels independently. My proposal goes in the direction of the first option: it has very few signatures, and everyone can verify that its local state is either in sync with what the node announced or not. This needs to be addressed now, and it will have an influence on how we build the remaining protocol, e.g., @Roasbeef's comment about the anchor pubkeys not being related to the node ID will require us to add more info to that serialization format. What do you guys think?

@rustyrussell (Collaborator):

OK, I typed up my counter-proposal, stealing lots of content from here, see #11

@pm47 mentioned this pull request Nov 21, 2016
rustyrussell added a commit that referenced this pull request Nov 22, 2016
So far, it's the only variable-length field we have in the protocol,
so that weighs me in the direction of simply nailing it.

Although a protocol violation, you could send a longer message: all
nodes will ignore the extra bytes unless we ever extend the MSG_ERROR
definition to add a field, which I can't see happening.

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
@cdecker (Collaborator Author) commented Nov 22, 2016:

Closing in favor of #11

@Roasbeef mentioned this pull request Sep 15, 2019
t-bast pushed a commit that referenced this pull request May 25, 2020
* Rename all the 'varint' to 'bigsize'.

Having both is confusing; we chose the name bigsize, so use it
explicitly.

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>

* BOLT 7: use `byte` instead of `u8`.

`u8` isn't a type; see BOLT #1 "Fundamental Types".

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>

* BOLT 1: promote bigsize to a Fundamental Type.

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
arik-so pushed a commit to arik-so/bolts that referenced this pull request Oct 18, 2022
jrakibi added a commit to jrakibi/bolts that referenced this pull request Feb 14, 2024