-
Any comments are welcome! /cc @shizhMSFT @FeynmanZhou
-
Hi @souleb, thanks for the design proposal!
-
Regarding go-retryablehttp, my proposition is to borrow the design by having the same method signatures. I think a statement referencing go-retryablehttp in the source code above those methods should be enough.
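For reference, the two function types in go-retryablehttp whose signatures could be borrowed look like this (a sketch of the borrowed shapes, not a committed oras-go API; the `retry` package name is hypothetical):

```go
package retry // hypothetical package name

import (
	"context"
	"net/http"
	"time"
)

// Function types as defined by hashicorp/go-retryablehttp; an oras-go retry
// implementation could mirror these signatures.
type CheckRetry func(ctx context.Context, resp *http.Response, err error) (bool, error)

type Backoff func(min, max time.Duration, attemptNum int, resp *http.Response) time.Duration
```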
-
@souleb The go-retryablehttp package looks great for HTTP-level retry. Applying the retryable HTTP client is also very simple using retryClient.StandardClient():

```go
retryClient := retryablehttp.NewClient()
retryClient.RetryMax = 10

authClient := &auth.Client{
	Client: retryClient.StandardClient(),
}
// use `authClient` for registry access
```

I don't think we need to re-invent the wheel in `oras-go`. For example,

```go
type Repository struct {
	// ...

	// Retry specifies a backoff strategy. No retry is applied if nil.
	Retry func(ctx context.Context) BackOff
}

type BackOff interface {
	// Next returns the next backoff duration and whether to stop
	Next() (time.Duration, bool)
}
```

or even integrate with cenkalti/backoff, which is under the MIT license. For most operations, we can just retry.
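As a rough illustration of the cenkalti/backoff option, an adapter satisfying the sketched `BackOff` interface might look like the following; the adapter type, constructor, and package placement are hypothetical:

```go
package remote // illustrative placement only

import (
	"context"
	"time"

	"github.com/cenkalti/backoff/v4"
)

// BackOff is the interface sketched in the comment above.
type BackOff interface {
	// Next returns the next backoff duration and whether to stop.
	Next() (time.Duration, bool)
}

// cenkaltiBackOff adapts a cenkalti/backoff policy to the BackOff interface.
type cenkaltiBackOff struct {
	b backoff.BackOff
}

func (c cenkaltiBackOff) Next() (time.Duration, bool) {
	d := c.b.NextBackOff()
	if d == backoff.Stop {
		return 0, true // give up
	}
	return d, false
}

// NewExponentialRetry returns a Retry function (see the Repository sketch
// above) backed by cenkalti/backoff's exponential policy.
func NewExponentialRetry() func(ctx context.Context) BackOff {
	return func(ctx context.Context) BackOff {
		return cenkaltiBackOff{b: backoff.NewExponentialBackOff()}
	}
}
```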
-
I agree with using go-retryablehttp. It is under the MPL-2.0 license. For the application-level retry we could have:

```go
// Given an error, a method (i.e. Push or Fetch) and the content, this
// function returns a boolean stating whether we can retry.
type IsRetryable func(error ErrorCode, method string, content io.Reader) bool
```

When coupling this with a
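A minimal sketch of what an implementation of that hook could look like; the `ErrorCode` type, the method names, and the retryable set below are placeholders rather than an actual oras-go API:

```go
package remote // illustrative placement only

import "io"

// ErrorCode is a placeholder for a registry error code type.
type ErrorCode string

// IsRetryable reports whether an operation may be retried, given the error
// code, the method (e.g. "Fetch" or "Push") and the content being transferred.
type IsRetryable func(code ErrorCode, method string, content io.Reader) bool

// defaultIsRetryable is an example policy: throttling errors are retried,
// and uploads are only retried when their content can be replayed.
func defaultIsRetryable(code ErrorCode, method string, content io.Reader) bool {
	if method == "Push" && content != nil {
		// A plain stream cannot be replayed; a seekable reader can.
		if _, ok := content.(io.Seeker); !ok {
			return false
		}
	}
	return code == "TOOMANYREQUESTS" // example code only
}
```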
-
An example based on the discussion so far:
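As a rough stand-in, a sketch combining the ideas above, assuming the go-retryablehttp API and the oras-go v2 `remote`/`auth` packages (all settings are illustrative):

```go
package main

import (
	"context"
	"fmt"
	"time"

	"github.com/hashicorp/go-retryablehttp"
	"oras.land/oras-go/v2/registry/remote"
	"oras.land/oras-go/v2/registry/remote/auth"
)

func main() {
	// HTTP-level retry with exponential backoff via go-retryablehttp.
	retryClient := retryablehttp.NewClient()
	retryClient.RetryMax = 5
	retryClient.RetryWaitMin = 250 * time.Millisecond
	retryClient.RetryWaitMax = 10 * time.Second

	// Wrap it in the ORAS auth client.
	authClient := &auth.Client{
		Client: retryClient.StandardClient(),
	}

	repo, err := remote.NewRepository("registry.example.com/hello-world")
	if err != nil {
		panic(err)
	}
	repo.Client = authClient

	// Any request made through repo now goes through the retrying client.
	desc, err := repo.Resolve(context.Background(), "latest")
	if err != nil {
		panic(err)
	}
	fmt.Println("resolved:", desc.Digest)
}
```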
-
Retry for ORAS
During remote access, it is not uncommon to have unstable network connections or temporarily unavailable / faulty cloud servers. Thus a retry pattern can be applied to mitigate this kind of issue. However, if the unexpected incident on the remote server takes a longer time to be mitigated, an opposite circuit breaker pattern should be applied instead. A suitable pattern should be chosen and applied according to the application-specific business logic.

Considering the scenarios of fetching regular layers and foreign layers, it is better to implement retry at the HTTP level instead of the application level, and there are existing packages (e.g. go-retryablehttp) for that. However, users may not know such packages, or they may have concerns about their licenses. It might still be good to have a retry package in `oras-go`.

Related Projects
- hashicorp/go-retryablehttp
- cenkalti/backoff
Proposal

This comment proposes a new package `retry`. The retry logic should be implemented at the `http.RoundTripper` level:

```go
package retry

type Transport struct {
	// default to http.DefaultTransport if nil
	Base http.RoundTripper

	// we can export them in future versions of oras-go with better refactoring
	backoff  time.Duration
	factor   int
	attempts int
	jitter   float64
}

var DefaultTransport = &Transport{
	backoff:  250 * time.Millisecond,
	factor:   2,
	attempts: 5,
	jitter:   0.1,
}

var DefaultClient = &http.Client{
	Transport: DefaultTransport,
}

// NewTransport creates an HTTP Transport with the default retry policy
func NewTransport(base http.RoundTripper) *Transport

// NewClient creates an HTTP client with the default retry policy
func NewClient() *http.Client
```

`NewTransport()` and `NewClient()` are provided to be developer friendly.
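If the proposed `retry` package existed as sketched, wiring it into registry access might look roughly like this (hypothetical API, following the sketch above):

```go
// Hypothetical usage of the proposed retry package.
authClient := &auth.Client{
	Client: retry.DefaultClient, // or &http.Client{Transport: retry.NewTransport(nil)}
}

repo, err := remote.NewRepository("registry.example.com/hello-world")
if err != nil {
	panic(err)
}
repo.Client = authClient
// all requests issued by repo are now retried with the default policy
```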
-
I’d like to express support for the implementation of this. Project Ratify relies upon ORAS-go for multiple simultaneous GET operations to the registry. ratify-project/ratify#502
-
Hello All,
I would like to open the discussion on the design of HTTP retries, as per #147.
Proposal
Add HTTP-level retry with exponential backoff to the remote package.
Note: Only GET and HEAD requests can be retried; retrying PUT/PATCH/POST requests is problematic.
For uploading a blob, the POST request is retriable but PATCH and PUT are not.
Note: The proposed API is based on the go-retryablehttp API.
We propose adding 2 new types:

- `CheckRetry` is called after each request. If it returns false, the Client stops retrying and returns the response to the caller. If it returns an error, that error value is returned in lieu of the error from the request. The Client will close any response body when retrying, but if the retry is aborted it is up to the `CheckRetry` callback to properly close any response body before returning.
- `Backoff` is called after a failing request to determine the amount of time that should pass before trying again.

`RetryWaitMin` and `RetryWaitMax` are the minimum and maximum amounts of time to wait for a retry, and are used to determine the backoff time. `RetryMax` is the maximum number of times to retry.

Users can implement their own `CheckRetry` and `Backoff` functions to customize the retry behavior.
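To make the notes about request methods concrete, here is a hedged sketch of a custom `CheckRetry` following the go-retryablehttp-style signature; the policy and the `checkRetry` name are illustrative only:

```go
package retry // hypothetical package name

import (
	"context"
	"net/http"
)

// checkRetry is an illustrative CheckRetry policy: it retries GET/HEAD
// (and the initial POST of a blob upload) on throttling or server errors,
// and never retries PATCH or PUT.
func checkRetry(ctx context.Context, resp *http.Response, err error) (bool, error) {
	// Stop immediately if the context was canceled or timed out.
	if ctx.Err() != nil {
		return false, ctx.Err()
	}
	if err != nil {
		// Network-level failure before any response was received; a real
		// policy would also need to consider the request method here.
		return true, nil
	}
	switch resp.Request.Method {
	case http.MethodGet, http.MethodHead, http.MethodPost:
		return resp.StatusCode == http.StatusTooManyRequests || resp.StatusCode >= 500, nil
	default: // PUT, PATCH, ...
		return false, nil
	}
}
```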
We propose adding the following default functions to the Client:

The `rateLimitBackoff` function is used when we are rate limited and is implemented using golang.org/x/time/rate. The `RateLimit-Limit` header is retrieved from the response headers and is used to initialize the rate limit bucket.

The `DefaultBackoff` function is used when we are not rate limited; the proposal is to port the polly-contrib WaitAndRetry with jitter code.
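A hedged sketch of how such a `rateLimitBackoff` might be built on golang.org/x/time/rate; the type name, the locking, and the interpretation of `RateLimit-Limit` as requests per second are assumptions, not the actual proposal:

```go
package retry // hypothetical package name

import (
	"net/http"
	"strconv"
	"sync"
	"time"

	"golang.org/x/time/rate"
)

// rateLimiter lazily builds a token bucket from the RateLimit-Limit header
// and derives the wait time for subsequent retries from it.
type rateLimiter struct {
	mu      sync.Mutex
	limiter *rate.Limiter
}

// backoff has the go-retryablehttp Backoff signature so it can be plugged in
// when the server signals throttling.
func (r *rateLimiter) backoff(min, max time.Duration, attemptNum int, resp *http.Response) time.Duration {
	r.mu.Lock()
	defer r.mu.Unlock()

	if r.limiter == nil && resp != nil {
		if limit, err := strconv.Atoi(resp.Header.Get("RateLimit-Limit")); err == nil && limit > 0 {
			// Assumption: treat the advertised limit as requests per second.
			r.limiter = rate.NewLimiter(rate.Limit(limit), limit)
		}
	}
	if r.limiter == nil {
		return min
	}
	delay := r.limiter.Reserve().Delay()
	if delay < min {
		return min
	}
	if delay > max {
		return max
	}
	return delay
}
```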
Design considerations
- We should be able to retry all HTTP requests.
- The response body should be read and closed before retrying, so that we can reuse the connection.
- If we rewind the request body before each retry, we should be able to retry PUT/PATCH/POST requests as well. A good example is the net/http package's rewindBody; see the sketch below.
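As a sketch of that rewinding step, loosely modeled on net/http's unexported rewindBody (the `rewindRequestBody` name is illustrative):

```go
package retry // hypothetical package name

import (
	"errors"
	"net/http"
)

// rewindRequestBody resets the request body before a retry by calling
// Request.GetBody, which net/http sets for replayable bodies such as
// bytes.Reader, bytes.Buffer and strings.Reader.
func rewindRequestBody(req *http.Request) error {
	if req.Body == nil || req.Body == http.NoBody {
		return nil // nothing to rewind
	}
	if req.GetBody == nil {
		return errors.New("request body is not replayable; the request cannot be retried")
	}
	body, err := req.GetBody()
	if err != nil {
		return err
	}
	req.Body = body
	return nil
}
```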
Alternatives
Use go-retryablehttp and provide custom `CheckRetry` and `Backoff` functions.
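A sketch of that alternative, reusing go-retryablehttp and plugging in custom policies; `checkRetry` stands for a user-supplied predicate such as the one sketched earlier, and `DefaultBackoff` is go-retryablehttp's own:

```go
retryClient := retryablehttp.NewClient()
retryClient.CheckRetry = checkRetry                // custom retry predicate
retryClient.Backoff = retryablehttp.DefaultBackoff // or a custom Backoff function

authClient := &auth.Client{
	Client: retryClient.StandardClient(),
}
// use authClient for registry access as usual
```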