fix: retry delay calculation check #364

Merged 2 commits on Nov 15, 2022
1 change: 1 addition & 0 deletions CHANGELOG.md
@@ -1,6 +1,7 @@
 ## [unreleased]
 ### Bug fixes
 - [#363](https://github.com/influxdata/influxdb-client-go/pull/363) Generated server stubs return also error message from InfluxDB 1.x forward compatible API.
+- [#364](https://github.com/influxdata/influxdb-client-go/pull/364) Fixed panic when retrying over a long period without a server connection.

 ## 2.12.0 [2022-10-27]
 ### Features
14 changes: 9 additions & 5 deletions internal/write/service.go
@@ -34,7 +34,7 @@ type Batch struct {
 	RetryAttempts uint
 	// true if it was removed from queue
 	Evicted bool
-	// time where this batch expires
+	// time when this batch expires
 	Expires time.Time
 }

@@ -105,10 +105,10 @@ func (w *Service) SetBatchErrorCallback(cb BatchErrorCallback) {

 // HandleWrite handles writes of batches and handles retrying.
 // Retrying is triggered by new writes, there is no scheduler.
-// It first checks retry queue, cause it has highest priority.
+// It first checks retry queue, because it has the highest priority.
 // If there are some batches in retry queue, those are written and incoming batch is added to end of retry queue.
 // Immediate write is allowed only in case there was success or not retryable error.
-// Otherwise delay is checked based on recent batch.
+// Otherwise, delay is checked based on recent batch.
 // If write of batch fails with retryable error (connection errors and HTTP code >= 429),
 // Batch retry time is calculated based on #of attempts.
 // If writes continues failing and # of attempts reaches maximum or total retry time reaches maxRetryTime,
@@ -249,13 +249,17 @@ func isIgnorableError(error *http2.Error) bool {
 	return false
 }

-// computeRetryDelay calculates retry delay
+// computeRetryDelay calculates retry delay.
 // Retry delay is calculated as random value within the interval
 // [retry_interval * exponential_base^(attempts) and retry_interval * exponential_base^(attempts+1)]
 func (w *Service) computeRetryDelay(attempts uint) uint {
 	minDelay := int(w.writeOptions.RetryInterval() * pow(w.writeOptions.ExponentialBase(), attempts))
 	maxDelay := int(w.writeOptions.RetryInterval() * pow(w.writeOptions.ExponentialBase(), attempts+1))
-	retryDelay := uint(rand.Intn(maxDelay-minDelay) + minDelay)
+	diff := maxDelay - minDelay
+	if diff <= 0 { //check overflows
+		return w.writeOptions.MaxRetryInterval()
+	}
+	retryDelay := uint(rand.Intn(diff) + minDelay)
 	if retryDelay > w.writeOptions.MaxRetryInterval() {
 		retryDelay = w.writeOptions.MaxRetryInterval()
 	}
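To make the change easier to follow outside the diff, here is a minimal, self-contained sketch of the fixed logic. The option values (5_000 ms retry interval, exponential base 2, 125_000 ms cap) and the pow helper are assumptions chosen to line up with the test expectations below; the real service reads these from its write options and uses its own internal helper.

package main

import (
	"fmt"
	"math/rand"
)

// Assumed option values for illustration only; the real service reads them
// from its write options (RetryInterval, ExponentialBase, MaxRetryInterval).
const (
	retryInterval    = 5_000   // ms
	exponentialBase  = 2
	maxRetryInterval = 125_000 // ms
)

// pow is a simplified stand-in for the client's integer power helper.
func pow(base, exp uint) uint {
	p := uint(1)
	for i := uint(0); i < exp; i++ {
		p *= base
	}
	return p
}

// computeRetryDelay mirrors the fixed logic: pick a random delay in
// [retryInterval*base^attempts, retryInterval*base^(attempts+1)), cap it at
// maxRetryInterval, and fall back to the cap when the interval width is not
// positive, which happens once the exponential term overflows for large
// attempt counts (previously that made rand.Intn panic).
func computeRetryDelay(attempts uint) uint {
	minDelay := int(retryInterval * pow(exponentialBase, attempts))
	maxDelay := int(retryInterval * pow(exponentialBase, attempts+1))
	diff := maxDelay - minDelay
	if diff <= 0 { // overflow guard, as in the PR
		return maxRetryInterval
	}
	retryDelay := uint(rand.Intn(diff) + minDelay)
	if retryDelay > maxRetryInterval {
		retryDelay = maxRetryInterval
	}
	return retryDelay
}

func main() {
	fmt.Println(computeRetryDelay(2))   // somewhere in [20_000, 40_000)
	fmt.Println(computeRetryDelay(100)) // 125_000: exponential term overflowed, guard applies
}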
5 changes: 4 additions & 1 deletion internal/write/service_test.go
@@ -467,7 +467,10 @@ func TestComputeRetryDelay(t *testing.T) {
 	assertBetween(t, srv.computeRetryDelay(2), 20_000, 40_000)
 	assertBetween(t, srv.computeRetryDelay(3), 40_000, 80_000)
 	assertBetween(t, srv.computeRetryDelay(4), 80_000, 125_000)
-	assert.EqualValues(t, 125_000, srv.computeRetryDelay(5))
+
+	for i := uint(5); i < 200; i++ { //test also limiting higher values
+		assert.EqualValues(t, 125_000, srv.computeRetryDelay(i))
+	}
 }

 func TestErrorCallback(t *testing.T) {
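For context on the assertions above: with the options implied by the expected ranges (5_000 ms retry interval, base 2, 125_000 ms cap), attempt 5 already has a lower bound of 160_000 ms, so every attempt from 5 onward is capped at 125_000; once attempts grow large enough for the exponential term to overflow, the pre-fix code handed a non-positive argument to rand.Intn, which panics. A small sketch of that failure mode, reduced to its core and using placeholder values:

package main

import (
	"fmt"
	"math/rand"
)

func main() {
	defer func() {
		if r := recover(); r != nil {
			// math/rand.Intn panics when its argument is <= 0.
			fmt.Println("recovered:", r)
		}
	}()

	// Pre-fix behaviour in miniature: once the exponential term overflows,
	// minDelay and maxDelay collapse to the same (or inverted) values and
	// rand.Intn receives a non-positive argument.
	minDelay, maxDelay := 0, 0 // what the overflowed multiplications effectively yield
	_ = uint(rand.Intn(maxDelay-minDelay) + minDelay)
}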