
Control the rate of internal messages #50

Closed
angt opened this issue Oct 21, 2019 · 13 comments


angt commented Oct 21, 2019

Currently set to max(100, 2*(rtt+rttvar)) ms.
This is related to #49.

@angt angt added this to the v0.2.3 milestone Oct 21, 2019

SntRkt commented Oct 23, 2019

Could a static rate of "ping" or "keep-alive" messages be defined in the path syntax? It would be great to be able to configure different paths with different ping rates regardless of whether traffic is flowing. Those pings could then be used to calculate accurate rtt, pdv, and loss stats over 5 s, 30 s, 1 min, etc. intervals.

I've noticed that an idle tunnel doesn't detect path failure. Path fail-over takes up to 12 seconds when sending 1 pps. If I send 4 pps I see very fast fail-over. My current workaround is to send 4 pps down the tunnel indefinitely. For my purposes it works great.


angt commented Oct 23, 2019

Hi,
Yes, on an unused path failure detection is not done. But when the link is used, failover is fast. It's like a passive health check that lets you use your own monitoring system on top of glorytun.
You can look at what we use on overthebox for the monitoring part: https://github.com/ovh/overthebox-feeds/blob/master/otb-tracker/bin


user747 commented Nov 23, 2019

> My current workaround is to send 4 pps down the tunnel indefinitely.

Interesting. So you have each path doing 'ping -i 0.25' to your tunnel?

My tracker settings are not very aggressive, so I will occasionally see layer 7 buffering when a path starts experiencing issues. This has the unintended side effect of alerting me when I need to troubleshoot a path further.


SntRkt commented Nov 23, 2019

> Interesting. So you have each path doing 'ping -i 0.25' to your tunnel?

I'm not using a tracker... I didn't realize that's what people were doing until angt explained it. I send UDP packets at a rate of 4/sec from the client to the server through the tunnel. Glorytun seems to send (passive) control packets down all paths at a rate equal to the data rate (up to some point). I rely on Glorytun mechanisms to detect and disable failed paths. My goal was to send enough traffic down the tunnel that fail-over happens sub-second.

Keep in mind, my goal is simply fail-over. I don't care much about load balancing at this point because my application is VoIP. I need consistent latency and jitter with quick fail-over. angt has plans to bring back the "backup" path functionality, so that should solve my problem.

The idea behind my request was that Glorytun could use active checks at regular intervals and internally collect the path performance metrics in circular buffers. Then we could refer to those metrics when using external user space utilities to dynamically change path rates, backup status, etc. It would be handy to be able to correlate path rates with performance metrics and adjust path bandwidth parameters on the fly or move a path to/from the backup state on the fly. For my fail-over purposes a tracking-type application could be used to do this with almost any VPN or tunnel. A routing protocol with BFD would be sufficient.
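The circular-buffer idea above could look something like this (a minimal sketch; names and sizes are hypothetical, not glorytun internals):

```c
#include <stddef.h>

/* Hypothetical per-path metrics ring: a fixed-size circular buffer of RTT
 * samples that external user-space tools could read to compute stats over
 * recent intervals. Purely illustrative of the proposal above. */
#define RING_SIZE 64

struct rtt_ring {
    long sample_ms[RING_SIZE];
    size_t head;   /* next write position */
    size_t count;  /* number of valid samples, up to RING_SIZE */
};

static void ring_push(struct rtt_ring *r, long rtt_ms)
{
    r->sample_ms[r->head] = rtt_ms;
    r->head = (r->head + 1) % RING_SIZE;
    if (r->count < RING_SIZE)
        r->count++;
}

/* Average RTT over the last n samples (n capped at what's stored).
 * With a known check interval, n maps directly to a time window,
 * e.g. 1 check/s means n=30 covers the last 30 seconds. */
static long ring_avg(const struct rtt_ring *r, size_t n)
{
    if (n > r->count)
        n = r->count;
    if (n == 0)
        return 0;
    long sum = 0;
    for (size_t i = 0; i < n; i++) {
        size_t idx = (r->head + RING_SIZE - 1 - i) % RING_SIZE;
        sum += r->sample_ms[idx];
    }
    return sum / (long)n;
}
```

A tracker-style utility could then compare such windowed averages across paths and flip a path to/from backup state when its metrics degrade.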


user747 commented Nov 25, 2019

> I send UDP packets at a rate of 4/sec from the client to the server through the tunnel.

Sorry if this is a bit off topic, I was just curious about this. Seems like there are many ways to do it after searching a bit:

while true; do echo "" > /dev/udp/tunnelip/portnumber; sleep 0.25; done


SntRkt commented Dec 9, 2019

I wrote a small C program that detects all tun interfaces configured as point-to-point and sends UDP packets at user specified intervals to the PTP remote IP address. I run it on the client. On the server side, I do not reply to those messages because I only need steady traffic to make the tunnel fail-over faster and keep the NAT state. I'm sure there are many other ways, but I needed something very efficient as my devices don't have much power.

Marctraider commented

> Keep in mind, my goal is simply fail-over. I don't care much about load balancing at this point because my application is VoIP.

If your main concern is reliability you should check out engarde + wireguard.


angt commented Jan 2, 2020

Thanks, I didn't know about engarde.
It looks interesting, do you use it?

@angt angt modified the milestones: v0.2.3, v0.3.0 Jan 2, 2020

Marctraider commented Jan 2, 2020

> Thanks, I didn't know about engarde.
> It looks interesting, do you use it?

Yes!

I started with OpenMPTCPRouter, then Glorytun, tried MLVPN, but none of these solutions gave me seamless failover when a WAN connection goes bad.

Engarde is probably the best thing I've ever used in terms of seamless redundancy. I can add synthetic packet loss or delay to two of the three WAN connections and still play games or use VoIP without a single hitch, until all 3 lines go bad.

The program is not very well known; it took me months to even figure out it existed while looking for ways to get a fully redundant connection through multiple ISPs. But maybe that's also because it uses WireGuard, which is relatively 'new', or at least still experimental.

The program is not very efficient in terms of throughput though (it seems to be a CPU usage issue), and I basically have to apply QoS/SQM to the tunnel itself; applying SQM to the individual WAN interfaces somehow doesn't work properly with this tool.

So I have to be really conservative (~40 Mbps at the moment) while my fastest connection can in theory reach 200 Mbps.

Here's hoping that someday a tool exists that can not only provide maximum redundancy but also (or dynamically) aggregate connections, for the best of both worlds!

Maybe glorytun in the future, who knows!


angt commented Jan 3, 2020

Nice! This is definitely the end goal of glorytun :)


angt commented Jan 11, 2020

The keepalive option has been merged into master (defaults to 25s).
There is also a new beat option in the path command to control the internal rate (defaults to 100ms), as requested.

@angt angt closed this as completed Jan 11, 2020

ghost commented Jun 23, 2020

I went back from Engarde to Glorytun UDP with udpspeeder; ironically I get 0 packet loss with a ~5% decrease in aggregated speed. I didn't see a benefit from Engarde (no aggregation) with udpspeeder.

I occasionally need to ping from one of the interfaces (ping -I interface) to restore the connection on a certain uplink behind double NAT; it happens every few days. I'm going to try keepalive with 1 second. Currently I'm using a ping script. Could it possibly be my ISP?

SriramScorp commented

Hi @angt! How do you tunnel glorytun via udpspeeder?
