
Change jitter implementation #10

Merged
merged 7 commits into from Jul 21, 2023
Conversation

@ThomWright (Member) commented Jul 4, 2023

Summary

This PR fixes two main problems:

  1. Jitter failing to be applied when reaching maximum retry intervals.
  2. Inability to reliably set a maximum total retry duration.

This change replaces the decorrelated jitter variation with three options:

  1. None
  2. Full
  3. Bounded

It also changes how 'maximum retry duration' is implemented. Before, there was no information about how long the task had been retrying, so there was no way to implement a correct 'maximum retry duration'.

This has been reimplemented to require the task's start time when setting a maximum retry duration.

Details

Jitter algorithms

Decorrelated jitter has a major flaw: clamping. Retry intervals can get repeatedly clamped to the maximum allowed duration. This effectively removes any jitter.

The implementation in this library is a variation which exacerbates this flaw.

For comparison, a standard exponential backoff algorithm:

sleep = min(max_duration, min_duration * 2 ** attempt)

With “full jitter”:

sleep = random_between(0, min(max_duration, min_duration * 2 ** attempt))
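The two formulas above can be sketched in a few lines of Rust. This is an illustration only, not this library's implementation; `Lcg` is a tiny stand-in PRNG so the example is self-contained.

```rust
// Sketch of "standard exponential backoff" and "full jitter" (illustration only).
struct Lcg(u64);
impl Lcg {
    // Returns a float uniformly distributed in [0, 1).
    fn next_f64(&mut self) -> f64 {
        self.0 = self
            .0
            .wrapping_mul(6364136223846793005)
            .wrapping_add(1442695040888963407);
        (self.0 >> 11) as f64 / (1u64 << 53) as f64
    }
}

/// Standard exponential backoff: min(max_duration, min_duration * 2 ** attempt).
fn exponential_backoff(min: f64, max: f64, attempt: u32) -> f64 {
    (min * 2f64.powi(attempt as i32)).min(max)
}

/// "Full jitter": random_between(0, exponential_backoff(...)).
fn full_jitter(rng: &mut Lcg, min: f64, max: f64, attempt: u32) -> f64 {
    rng.next_f64() * exponential_backoff(min, max, attempt)
}

fn main() {
    let mut rng = Lcg(42);
    assert_eq!(exponential_backoff(1.0, 10.0, 3), 8.0);
    assert_eq!(exponential_backoff(1.0, 10.0, 5), 10.0); // clamped to max_duration
    // Even when the unjittered interval is clamped, full jitter still spreads
    // sleeps over the whole [0, max_duration) range.
    let sleep = full_jitter(&mut rng, 1.0, 10.0, 5);
    assert!(sleep >= 0.0 && sleep < 10.0);
    println!("jittered sleep: {sleep}");
}
```

Note that with full jitter the clamp is applied *before* the randomisation, so jitter survives even at the cap.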

Whereas “decorrelated jitter” is this:

sleep = min(max_duration, random_between(min_duration, prev_sleep * 3))

Let’s break this down into two parts:

# First: increase the sleep value by multiplying it.
# Since prev_sleep <= max_duration, this is bounded by 3 * max_duration.
temp = random_between(min_duration, prev_sleep * 3)

# Second: clamp it
sleep = min(max_duration, temp)

The sleep duration will generally increase every iteration. With this algorithm, when the previous sleep grows as large as max_duration there is only a 1/3 chance of applying jitter. E.g.:

max_duration = 10
min_duration = 1

prev_sleep = 10
sleep = min(10, random_between(1, 10 * 3))

This isn't great. In this case, 2/3 of the time the sleep will be the max_duration.
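We can check that claim with a quick simulation. This is an illustration, not this library's code; `Lcg` is a stand-in PRNG.

```rust
// Illustration (not this library's code): once prev_sleep has reached
// max_duration, simulate decorrelated jitter and measure how often the
// result is clamped to exactly max_duration.
struct Lcg(u64);
impl Lcg {
    // Uniform float in [0, 1).
    fn next_f64(&mut self) -> f64 {
        self.0 = self
            .0
            .wrapping_mul(6364136223846793005)
            .wrapping_add(1442695040888963407);
        (self.0 >> 11) as f64 / (1u64 << 53) as f64
    }
}

/// Fraction of sleeps clamped to max_duration when prev_sleep == max_duration.
fn clamped_fraction(trials: u32) -> f64 {
    let (min_duration, max_duration) = (1.0_f64, 10.0_f64);
    let prev_sleep = max_duration; // worst case: already at the cap
    let mut rng = Lcg(7);
    let mut clamped = 0u32;
    for _ in 0..trials {
        // sleep = min(max_duration, random_between(min_duration, prev_sleep * 3))
        let temp = min_duration + rng.next_f64() * (prev_sleep * 3.0 - min_duration);
        if temp.min(max_duration) >= max_duration {
            clamped += 1;
        }
    }
    clamped as f64 / trials as f64
}

fn main() {
    // random_between(1, 30) exceeds 10 about 20/29 (roughly 69%) of the time,
    // i.e. roughly 2/3 of sleeps collapse to exactly max_duration.
    println!("clamped fraction: {:.3}", clamped_fraction(100_000));
}
```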

So, decorrelated jitter is this:

sleep = min(max_duration, random_between(min_duration, prev_sleep * 3))

But instead, what our algorithm does is more like this:

sleep = min(max_duration, (min_duration * base ** attempt) * random_between(0, 3))

Again, broken down:

# Calculate a sleep value, can get unboundedly big!
temp = (min_duration * base ** attempt) * random_between(0, 3)

# Then clamp it
sleep = min(max_duration, temp)

Here, there is no real bound on how high min_duration * base ** attempt can go. E.g.:

min_duration = 1
max_duration = 900
base = 4

attempt = 10
sleep = min(900, (1 * 4 ** 10) * random_between(0, 3))
sleep = min(900, 1_048_576 * random_between(0, 3))

In this case we have a very small chance of any meaningful jitter: a probability of roughly 0.0003 (about 0.03%) by my calculation.
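That probability can be sanity-checked in a few lines of Rust. This is a back-of-the-envelope illustration; "any jitter" here means the randomised value lands below max_duration instead of being clamped.

```rust
// Back-of-the-envelope check (illustration only): with the parameters above,
// the sleep is only jittered at all when (1 * 4^10) * r < 900, where
// r = random_between(0, 3).
fn main() {
    let unjittered = 1.0 * 4.0_f64.powi(10); // 1 * 4 ** 10 = 1_048_576
    let max_duration = 900.0_f64;
    // P(r < 900 / 1_048_576) for r uniform on [0, 3)
    let p_any_jitter = (max_duration / unjittered) / 3.0;
    println!("P(any jitter) = {p_any_jitter:.6}"); // ≈ 0.000286
    assert!(p_any_jitter < 0.001); // vanishingly small
}
```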

Total retry duration

The current implementation makes a guess at how many attempts it will take to exceed a given total duration. It can't be accurate because of the random jitter, so this is a best guess based on mean jitter.
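To see why such a guess drifts, here is a hypothetical sketch (not the library's actual formula, and assuming full jitter for simplicity) that estimates the attempt count at which the *mean* cumulative sleep exceeds a total duration:

```rust
// Illustration (hypothetical numbers, not this library's exact formula):
// estimate how many attempts it takes for the mean cumulative sleep to
// exceed a total retry duration, assuming full jitter.
fn estimated_attempts(min: f64, max: f64, total: f64) -> u32 {
    let mut cumulative_mean = 0.0;
    let mut attempts = 0u32;
    while cumulative_mean < total {
        let unjittered = (min * 2f64.powi(attempts as i32)).min(max);
        cumulative_mean += unjittered / 2.0; // mean of Uniform(0, unjittered)
        attempts += 1;
    }
    attempts
}

fn main() {
    // Any individual run is random, so the real elapsed time can land either
    // side of this estimate -- which is why a count-based cutoff is inaccurate.
    println!("{}", estimated_attempts(1.0, 60.0, 300.0)); // prints 15
}
```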

Propagating state

Again, decorrelated jitter is:

sleep = min(max_duration, random_between(min_duration, prev_sleep * 3))

This relies on the previous sleep value. But the should_retry(&self, n_past_retries: u32) function doesn't have access to the previous sleep value, which is (presumably) why a variation on the algorithm was used.

Most similar retry libraries take &mut self rather than &self to keep track of this state internally.

For this library it might not help much though, because retry state is tracked outside of the policy object. E.g. for retrying AMQP messages, we send the number of attempts in a header. The service receiving the retried message looks at the header to retrieve the state and makes the next retry decision based on that.

Before this change, "number of attempts" was the only state this library supported.

Start times

In order to accurately stop retrying after a given total duration, we need to know how long we've been retrying for. The simplest way to do this is to propagate the task's start time.

The API has been changed to accommodate this, in a way which keeps the RetryPolicy trait as-is.

// Returns an `ExponentialBackoffTimed` with a 24 hour maximum total retry duration
let backoff = ExponentialBackoff::builder()
    .build_with_total_retry_duration(Duration::from_secs(24 * 60 * 60));

// A task which started 25 hours ago, i.e. already past the 24 hour limit
let started_at = Utc::now()
    .checked_sub_signed(chrono::Duration::seconds(25 * 60 * 60))
    .unwrap();

backoff
    .for_task_started_at(started_at) // A task start time must be supplied before `should_retry()` can be called
    .should_retry(0); // RetryDecision::DoNotRetry

To do

  • Better changelog notes

@ThomWright ThomWright requested a review from a team as a code owner July 4, 2023 09:45
Decorrelated jitter has a major flaw: clamping. Retry intervals can get
repeatedly clamped to the maximum allowed duration. This effectively
removes any jitter.

The implementation in this library is a variation which exacerbates this
flaw.

This change replaces this jitter implementation with three options:

1. None
2. Full
3. Bounded

It also changes how 'maximum retry duration' is implemented. Before,
there was no information about how long the task had been retrying, so
there was no way to implement a correct 'maximum retry duration'.

This has been reimplemented to require a start time for a task when
setting a maximum retry duration.