
RT connectivity changes #482

Merged 1 commit into cypress on Mar 10, 2020

Conversation

@aarshkshah1992 (Contributor) commented on Mar 6, 2020:

@Stebalien @aschmahmann @willscott

For #283, #434 & libp2p/go-libp2p#779.

Corresponding Routing Table PR:
libp2p/go-libp2p-kbucket#54

Closed #465 in favour of this.

@aschmahmann changed the base branch from feat/mode-switching to cypress on March 6, 2020 at 20:26
Resolved review threads: dht_bootstrap.go (outdated), dht_bootstrap.go, dht_bootstrap.go (outdated), dht_test.go, opts/options.go (outdated)
@Stebalien (Member) commented:

(looks good otherwise)

Resolved review thread: notify_test.go (outdated)
@@ -17,7 +17,7 @@ var DefaultBootstrapPeers []multiaddr.Multiaddr

 // Minimum number of peers in the routing table. If we drop below this and we
 // see a new peer, we trigger a bootstrap round.
-var minRTRefreshThreshold = 4
+var minRTRefreshThreshold = 10
Contributor:

@aarshkshah1992 do you think there are reasons people may want to be able to configure this for themselves? No need to prematurely expose things, but when we start changing the constants around, it's generally worth an evaluation.

@aarshkshah1992 (Contributor, Author) replied on Mar 9, 2020:
This value is too internal for users to want to, or be able to, manipulate. IMO, we should not expose it as a functional option.
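(For context, a minimal sketch of how a threshold like this typically gates a refresh trigger inside the dht package; the function name and refresh channel below are illustrative assumptions, not the DHT's actual code.)

```go
// Illustrative sketch only: when we learn about a new peer, check whether
// the routing table has dropped below minRTRefreshThreshold and, if so,
// signal the refresh loop to run a bootstrap round. The non-blocking send
// avoids stalling the caller if a refresh is already queued.
func maybeTriggerRefreshSketch(rtSize int, triggerRefresh chan<- struct{}) {
	if rtSize < minRTRefreshThreshold {
		select {
		case triggerRefresh <- struct{}{}:
		default:
		}
	}
}
```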

dht.go (outdated):
return dht, nil
}

func makeRoutingTable(h host.Host, cfg *opts.Options) (*kb.RoutingTable, error) {
Contributor:
Any reason we're using *opts.Options instead of opts.Options? Seems like we may as well be consistent with makeDHT, so I'd make them both use either pointer or value semantics. Since we're only doing gets on a small data structure I figured use value semantics, but I could be swayed if you think this is better.

@aarshkshah1992 (Contributor, Author) replied:
I don't care either way. Let's go with opts.Options.

@aarshkshah1992 (Contributor, Author) replied:
Done.
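(A small self-contained sketch of the value-vs-pointer choice settled here; Options below is a simplified stand-in for the real opts.Options, not the actual type.)

```go
package main

import "fmt"

// Options stands in for opts.Options; the real struct carries more fields.
type Options struct {
	BucketSize int
}

// byValue mirrors the agreed-upon makeRoutingTable signature: the config is
// small and only read from, so a copy is cheap and the callee cannot mutate
// shared state.
func byValue(cfg Options) int { return cfg.BucketSize }

// byPointer mirrors the original *opts.Options form; it also works, but
// offers no real benefit for a small, read-only struct.
func byPointer(cfg *Options) int { return cfg.BucketSize }

func main() {
	cfg := Options{BucketSize: 20}
	fmt.Println(byValue(cfg), byPointer(&cfg))
}
```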

Resolved review thread: dht_bootstrap.go (outdated)
dht.routingTable.Update(p)
// peerFound signals the routingTable that we've found a peer that
// supports the DHT protocol.
func (dht *IpfsDHT) peerFound(ctx context.Context, p peer.ID) {
Contributor:
@aarshkshah1992 and I discussed this a bit offline. As long as the routing table has a bounded queue size for connected peers, it is possible that a peer we are connected to will be added to neither the routing table nor the replacement queue. If that occurs, we have no event (other than fixLowPeers, which only fires when the routing table is almost completely empty) that causes us to re-add existing peers to the routing table/replacement queue.

Some fixes for this include:

  1. Maintaining state: e.g. allowing the replacement cache to contain an unbounded number of connected peers
  2. Polling state: looping through our connected peers more frequently (e.g. periodically, or when a bucket goes from full to having nothing in its replacement cache), as sketched below
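(A minimal sketch of the polling idea in item 2, written as it might look inside the dht package; the method name and interval parameter are illustrative assumptions, and the usual context/time imports are assumed. dht.peerFound is the hook shown in the diff above.)

```go
// pollConnectedPeersSketch periodically walks our current connections and
// re-offers each peer to the routing-table logic via peerFound, so peers
// that were previously dropped from the RT/replacement cache get another
// chance to be added.
func (dht *IpfsDHT) pollConnectedPeersSketch(ctx context.Context, interval time.Duration) {
	ticker := time.NewTicker(interval)
	defer ticker.Stop()
	for {
		select {
		case <-ticker.C:
			for _, p := range dht.host.Network().Peers() {
				dht.peerFound(ctx, p)
			}
		case <-ctx.Done():
			return
		}
	}
}
```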

@aarshkshah1992 (Contributor, Author) replied on Mar 9, 2020:
I don't like the idea of an unbounded replacement cache, as it will leak entries for CPLs that don't see missing peers often.

Option 2 sounds like a good idea. When we drop a peer from the replacement cache, we should add a connected peer that is in neither the cache nor the RT, i.e. make it event-driven. The DHT could pass a callback to the RT to fetch a candidate for this purpose.

WDYT @Stebalien? Should we do this right away?

Member replied:
I'd do this but punt it to a future patch. Could you file an issue?

My only concern is that we don't want to frequently loop over the entire list of connections if we happen to not know of any DHT server peers.

@aarshkshah1992 (Contributor, Author) replied:
Here's the issue:

libp2p/go-libp2p-kbucket#58
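(For context, a minimal sketch of the event-driven idea discussed above: the routing table invokes a DHT-supplied callback whenever it drops a replacement-cache entry. All names here are hypothetical, not the go-libp2p-kbucket API; peer.ID is from github.com/libp2p/go-libp2p-core/peer.)

```go
// PeerCandidateFn is a hypothetical callback the DHT would hand to the
// routing table: given the CPL whose replacement cache just lost an entry,
// return a connected peer that is in neither the RT nor the cache.
type PeerCandidateFn func(cpl uint) (peer.ID, bool)

// replacementCacheSketch is a toy per-CPL replacement cache.
type replacementCacheSketch struct {
	peers map[uint][]peer.ID
}

// onEviction shows the event-driven refill: when an entry for cpl is
// dropped, immediately ask the DHT for a fresh candidate, so the cache
// stays bounded without periodically scanning all connections.
func (c *replacementCacheSketch) onEviction(cpl uint, getCandidate PeerCandidateFn) {
	if p, ok := getCandidate(cpl); ok {
		c.peers[cpl] = append(c.peers[cpl], p)
	}
}
```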

@Stebalien added this to the Working Kademlia milestone on Mar 10, 2020
@Stebalien force-pushed the feat/rt-connectivity-changes branch from dcc2a47 to cf37fca on March 10, 2020 at 21:48
@Stebalien merged commit fbb1b36 into cypress on Mar 10, 2020
@Stebalien deleted the feat/rt-connectivity-changes branch on March 10, 2020 at 21:51
3 participants