
implement a token-bucket pacing algorithm #2615

Merged — 5 commits into master, Jun 24, 2020

Conversation

marten-seemann (Member) commented Jun 19, 2020

Fixes #2606. cc @Stebalien

The bucket accumulates "tokens" (bytes that can be sent out immediately). Every time we send a packet, we check if there are enough tokens in the bucket to send out a (full-size) packet.

Once the tokens in the bucket are depleted, we need to wait for enough new tokens to accumulate before we can send another packet. TimeUntilSend can be used to query when that will be the case. To avoid setting timers that are too short (which would be computationally inefficient, and timers have limited resolution anyway), this function never returns a duration smaller than MinPacingDelay (1ms).

When sending out packets, we arm a pacing timer if the bucket runs out of tokens and we're not congestion limited. If we're congestion limited, but still have enough tokens in the bucket, there's no need to arm a timer, as the only way we'll be allowed to send more packets is by receiving an ACK from the peer that frees up the congestion window.
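The decision above reduces to a small predicate. A sketch with hypothetical inputs (congestionLimited from the congestion controller, hasPacingBudget from the bucket — these names are made up, not quic-go's API):

```go
package main

import "fmt"

// needsPacingTimer decides whether to arm a pacing timer after trying to send.
func needsPacingTimer(congestionLimited, hasPacingBudget bool) bool {
	if congestionLimited {
		// No timer needed: the next send opportunity arrives with an ACK
		// that opens the congestion window, not with a pacing tick.
		return false
	}
	// Congestion window is open, so pace only if the bucket is empty.
	return !hasPacingBudget
}

func main() {
	fmt.Println(needsPacingTimer(false, false)) // bucket empty, cwnd open: true
	fmt.Println(needsPacingTimer(true, false))  // congestion limited: false
}
```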

To avoid sending large bursts into the network, the bucket size is limited to the maximum of a constant maxBurstSize (10 full-size packets) and the number of packets that we're supposed to send out during MinPacingDelay + TimerGranularity (both 1ms).
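That cap might be computed along these lines (constant values taken from the description above; the function name and a byte-based bucket are assumptions):

```go
package main

import (
	"fmt"
	"time"
)

const (
	maxDatagramSize  = 1200 // assumed full-size packet, in bytes
	maxBurstPackets  = 10
	minPacingDelay   = time.Millisecond
	timerGranularity = time.Millisecond
)

// maxBucketSize returns the larger of a 10-packet burst and the number of
// bytes the pacing rate would emit during MinPacingDelay + TimerGranularity.
func maxBucketSize(bandwidth float64) float64 {
	burst := float64(maxBurstPackets * maxDatagramSize)
	window := (minPacingDelay + timerGranularity).Seconds() * bandwidth
	if window > burst {
		return window
	}
	return burst
}

func main() {
	// 2 ms at 1 MB/s is only 2000 bytes, so the 12000-byte burst cap wins.
	fmt.Printf("%.0f\n", maxBucketSize(1e6)) // prints "12000"
	// 2 ms at 100 MB/s is 200000 bytes, which dominates the burst cap.
	fmt.Printf("%.0f\n", maxBucketSize(1e8)) // prints "200000"
}
```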

As recommended in the recovery draft, the pacer fills the bucket with a slightly higher rate than the actual bandwidth (N = 1.25). This allows us to completely fill the congestion window even when the RTT varies. More importantly, it also means that in the case of a continuous transfer, we become congestion limited shortly before depleting the bucket. As described above, in this case we don't need to arm a pacing timer, as packets are effectively paced by receiving acknowledgements. This greatly reduces the number of times we need to arm the session timer.

In my tests, this PR reduces the number of times we arm a pacing timer by 2/3 on the sending side, and eliminates the setting of pacing timers completely on the receiving side (as it should, there's nothing to pace when you're just acknowledging incoming packets).

codecov bot commented Jun 19, 2020

Codecov Report

Merging #2615 into master will increase coverage by 0.07%.
The diff coverage is 96.30%.


@@            Coverage Diff             @@
##           master    #2615      +/-   ##
==========================================
+ Coverage   86.21%   86.28%   +0.07%     
==========================================
  Files         122      123       +1     
  Lines        9775     9787      +12     
==========================================
+ Hits         8427     8444      +17     
+ Misses       1004     1001       -3     
+ Partials      344      342       -2     
Impacted Files Coverage Δ
internal/congestion/cubic_sender.go 91.11% <77.78%> (-1.14%) ⬇️
internal/ackhandler/sent_packet_handler.go 73.04% <100.00%> (-0.35%) ⬇️
internal/congestion/pacer.go 100.00% <100.00%> (ø)
session.go 75.54% <100.00%> (+0.24%) ⬆️


Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Last update 3289d2c...fda00fe.

Development

Successfully merging this pull request may close these issues.

use a token bucket algorithm for pacing packet
2 participants