Private reachability for Public node #3087
The node may be behind a firewall. Is it possible to enable Prometheus metrics?
Yes, we do have a firewall for security reasons, but we have enabled all the ports we are aware of.
Sure, I just enabled it, and I believe this is what you're looking for: Prometheus Metrics.
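(As an aside: recent go-libp2p releases register their metrics with the default Prometheus registry out of the box, so exposing them is usually just a matter of serving that registry over HTTP. A minimal sketch, assuming the standard promhttp handler and an arbitrarily chosen port 2112:)

```go
package main

import (
    "log"
    "net/http"

    "github.com/libp2p/go-libp2p"
    "github.com/prometheus/client_golang/prometheus/promhttp"
)

func main() {
    // Recent go-libp2p versions register their metrics with
    // prometheus.DefaultRegisterer by default.
    h, err := libp2p.New()
    if err != nil {
        log.Fatal(err)
    }
    defer h.Close()

    // Serve the default registry; the port here is an arbitrary choice.
    http.Handle("/metrics", promhttp.Handler())
    log.Fatal(http.ListenAndServe(":2112", nil))
}
```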
You need to open the port that the node is listening on. From the metrics, I see that a lot of your requested dial-backs errored. I also see that in the last few hours the node has not made any dial-back requests, which is very strange, because I see some peers in the swarm. Maybe none of them support the AutoNAT protocol? If you're confident that the node is private, you can use https://github.com/libp2p/go-libp2p/blob/master/options.go#L349
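(For reference, a node can watch how AutoNAT classifies it by subscribing to reachability events on the host's event bus. A rough sketch, assuming a recent go-libp2p release, with error handling abbreviated:)

```go
package main

import (
    "fmt"

    "github.com/libp2p/go-libp2p"
    "github.com/libp2p/go-libp2p/core/event"
)

func main() {
    h, err := libp2p.New()
    if err != nil {
        panic(err)
    }
    defer h.Close()

    // EvtLocalReachabilityChanged fires whenever AutoNAT updates the host's
    // view of its own reachability (Unknown, Public, or Private).
    sub, err := h.EventBus().Subscribe(new(event.EvtLocalReachabilityChanged))
    if err != nil {
        panic(err)
    }
    defer sub.Close()

    for ev := range sub.Out() {
        fmt.Println("reachability changed:", ev.(event.EvtLocalReachabilityChanged).Reachability)
    }
}
```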
Thanks for taking the time to check and identify the issue. Please consider that this is not a test network, and we can't easily make changes for debugging purposes.
Which port?
We probably have at least 6 (the nodes we have configured).
The port is determined by how you configure the libp2p host. For example, this is how you listen on port 9000 (see go-libp2p/examples/libp2p-host/host.go, lines 60 to 63 at 4e85c96):
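(The linked lines aren't reproduced here; roughly, they set the listen addresses when constructing the host, something along these lines, though the exact multiaddrs in that example may differ slightly:)

```go
// Sketch of a host listening on port 9000 for both TCP and QUIC.
h, err := libp2p.New(
    libp2p.ListenAddrStrings(
        "/ip4/0.0.0.0/tcp/9000",         // TCP on port 9000
        "/ip4/0.0.0.0/udp/9000/quic-v1", // QUIC runs over UDP on port 9000
    ),
)
if err != nil {
    panic(err)
}
defer h.Close()
```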
P.S. For QUIC, WebTransport, and WebRTC you can't reuse the same port (in this example, port 9000).
P.S. Like 2color mentioned, you will need to open both TCP & UDP, since QUIC uses UDP. Can this issue be closed?
I'm not exactly sure of the reason, but I can confirm that this issue happens with the latest libp2p versions. We never faced this problem before (with v0.36.5 or earlier). Public nodes are still showing as private. Take a look here: https://bootstrap1.pactus.org/node
OK, I'm not familiar with Pactus, to be fully honest, but I did a simple telnet to 45.118.133.17 21888:

```
Trying 45.118.133.17...
Connected to 45.118.133.17.
Escape character is '^]'.
Connection closed by foreign host.
```

I could imagine this kind of behaviour leading IPFS AutoNAT to misjudge the network setup and keep the reachability set to Private. And like sukunrt said before, you can try setting an option on libp2p to force it to report itself as "Public", for example:

```go
p2pHost, err := libp2p.New(
    libp2p.ForceReachabilityPublic(),
    // ...other options
)
```

Have you already tried that in your code?
No, because that is a hack. It may cover up the issue but not fix it.
OK, and what about my telnet connection being closed? Furthermore, regarding the IPv6 address: are you really sure you configured IPv6 correctly in your firewall?
Melror, thanks for the information provided! It's great to see that you are going to investigate this problem. Whatever the outcome, it will definitely help the libp2p project, either by identifying a bug or by improving the documentation to prevent future mistakes by other projects.

I am not really good with networking tools and debugging, so I wrote a Python script to check the reachability status of some nodes whose addresses we have (as bootstrappers). Most of them have not enabled their gRPC, but some have, so we were able to get some results:

```
python3 ./examples/pactus_reachability.py
Reachability status of bootstrap1.pactus.org:50051 is Private
Reachability status of bootstrap2.pactus.org:50051 is Private
Reachability status of bootstrap3.pactus.org:50051 is Public
Reachability status of bootstrap4.pactus.org:50051 is Public
Reachability status of pactus-bootstrap.sensifai.com:50051 is Private
Reachability status of 65.108.211.187:50051 is Private
Reachability status of 116.203.238.65:50051 is Private
Reachability status of node1.javad.dev:50051 is Private
Reachability status of pactus1.guanqian.me:50051 is Private
Reachability status of pactus2.guanqian.me:50051 is Private
Reachability status of 144.76.70.103:50051 is Private
Reachability status of 185.233.107.64:50051 is Private
Reachability status of 193.22.146.101:50051 is Private
Reachability status of 188.245.82.138:50051 is Private
Reachability status of 178.63.196.133:50051 is Private
Reachability status of 188.245.64.103:50051 is Private
Reachability status of 37.120.173.43:48844 is Private
Reachability status of 103.156.0.148:50051 is Private
Reachability status of 1.53.252.54:50051 is Private
Reachability status of 103.35.64.107:50051 is Public
Reachability status of 116.203.238.65:50051 is Private
Reachability status of 103.35.64.107:50051 is Public
Reachability status of 135.181.255.172:50051 is Public
Reachability status of 103.42.117.229:50051 is Private
Reachability status of 65.108.211.187:50051 is Private
Reachability status of 49.13.202.219:50051 is Unknown
Reachability status of 116.202.24.60:50051 is Private
Reachability status of 188.121.116.247:50051 is Private
```

See, most of them are private.
pactus_reachability.py:

```python
# How to run:
# Install the Pactus Python SDK: `pip install pactus-sdk`
# Execute the script: `python3 pactus_reachability.py`

from pactus.rpc.network_pb2_grpc import NetworkStub
from pactus.rpc.network_pb2 import GetNodeInfoRequest
import grpc

public_node = [
"bootstrap1.pactus.org:50051",
"bootstrap2.pactus.org:50051",
"bootstrap3.pactus.org:50051",
"bootstrap4.pactus.org:50051",
"pactus-bootstrap.sensifai.com:50051",
"65.108.211.187:50051",
"84.247.165.249:50051",
"62.171.130.196:50051",
"135.181.42.222:50051",
"pactus-bootstrap.stakers.world:50051",
"158.220.91.129:50051",
"pactus-bootstrap.ionode.online:50051",
"159.148.146.149:50051",
"pactus-bootnode.stake.works:50051",
"65.108.68.214:50051",
"65.21.180.80:50051",
"37.27.26.40:50051",
"pactus-bootstrap.mflow.tech:50051",
"5.161.222.154:50051",
"65.21.155.128:50051",
"pactus-bootstrap.teoviteovi.com:50051",
"37.60.233.235:50051",
"43.135.166.197:50051",
"46.250.233.5:50051",
"pactus-bootstrap.codeblocklabs.com:50051",
"84.247.184.95:50051",
"pactus-bootstrap.thd.io.vn:50051",
"37.27.25.245:50051",
"173.212.254.120:50051",
"62.171.134.3:50051",
"109.123.240.5:50051",
"75.119.159.113:50051",
"161.97.145.100:50051",
"109.123.253.88:50051",
"167.86.80.169:50051",
"65.21.69.53:50051",
"pactus-bootstrap.validator.wiki:50051",
"157.90.22.45:50051",
"65.21.152.153:50051",
"128.140.103.90:50051",
"185.216.75.142:50051",
"85.190.246.251:50051",
"45.134.226.48:50051",
"154.38.187.40:50051",
"194.163.164.10:50051",
"167.86.116.61:50051",
"46.196.214.35:50051",
"42.117.247.180:50051",
"38.242.225.172:50051",
"167.86.121.11:50051",
"194.163.189.255:50051",
"37.60.245.47:50051",
"84.247.172.23:50051",
"95.216.222.135:50051",
"65.109.143.78:50051",
"116.203.238.65:50051",
"49.13.27.37:50051",
"37.60.241.89:50051",
"195.201.112.164:50051",
"94.72.120.50:50051",
"95.111.242.225:50051",
"pactus-bootstrap1.dezh.tech:50051",
"5.75.183.248:50051",
"node1.javad.dev:50051",
"pactus1.guanqian.me:50051",
"pactus2.guanqian.me:50051",
"95.217.89.202:50051",
"144.76.70.103:50051",
"185.233.107.64:50051",
"193.22.146.101:50051",
"188.245.82.138:50051",
"178.63.196.133:50051",
"188.245.64.103:50051",
"37.120.173.43:48844",
"103.156.0.148:50051",
"1.53.252.54:50051",
"103.35.64.107:50051",
"116.203.238.65:50051",
"103.35.64.107:50051",
"135.181.255.172:50051",
"103.42.117.229:50051",
"65.108.211.187:50051",
"49.13.202.219:50051",
"116.202.24.60:50051",
"188.121.116.247:50051",
]
def main() -> None:
    for node in public_node:
        try:
            channel = grpc.insecure_channel(node)
            stub = NetworkStub(channel)
            req = GetNodeInfoRequest()
            res = stub.GetNodeInfo(req, timeout=3)
            print(f"Reachability status of {node} is {res.reachability}")
        except Exception as e:
            # print(f"Error: {e}")
            pass


if __name__ == "__main__":
    main()
```
Reachability is not consistent information, and for some unknown reason, some public nodes with public IPs are shown as private [Example here].
Any idea why this happens? This probably has some effect on other modules, which I believe should be taken care of.