
Implement & use RateLimitTransport #145

Merged: 2 commits into master from f-rate-limits on Sep 7, 2018
Conversation

radeksimko (Contributor) commented:

As described in the attached code comments, this PR implements the best practices GitHub describes in their official documentation.

This is the second of two PRs addressing #5.
Closes #5

The upside is that with this patch the user should never see rate-limit-related errors anymore, as we'll always retry accordingly and, more importantly, avoid hitting the limit in the first place.

Downsides include a slowdown of CUD (create/update/delete) operations and generally slower UX when managing a large number of resources. Admittedly, if the previous/current experience was "rate limit errors" and the new one is a slow but successful apply/destroy, I'd still call that a win. 🤷‍♂️ See some figures below.

We could in theory expose a flag to turn off the sleep logic and/or mutex, but I'm not too keen on doing that pre-emptively without further context, esp. because we're just implementing GitHub's own best practices.

100 repositories test

Current implementation

  • creation: 16 seconds
  • refresh: 4 seconds
  • destroy: 9 seconds

Serializing requests via mutex

  • creation: 147 seconds (9x slower)
  • refresh: 25 seconds (6x slower)
  • destroy: 48 seconds (5x slower)

1-second delay between write operations

  • creation: 352 seconds (22x slower than current, 2.4x slower than mutex-only)
  • refresh: 25 seconds (6x slower than current, equal to mutex-only)
  • destroy: 150 seconds (16x slower than current, 3x slower than mutex-only)
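
For illustration, here is a condensed sketch of the transport described above. This is not the exact code merged in this PR: the names rateLimitTransport, delayNextRequest, writeDelay and isWriteMethod follow the snippets quoted in the review below, while the write-method set, the delay value and the retry handling are assumptions based on go-github's RateLimitError and AbuseRateLimitError types.

package github

import (
	"log"
	"net/http"
	"sync"
	"time"

	"github.com/google/go-github/github"
)

// Assumed delay between write operations (the "1-second delay" benchmark above).
const writeDelay = 1 * time.Second

// rateLimitTransport is a sketch of the wrapper described in this PR, not the merged code.
type rateLimitTransport struct {
	transport        http.RoundTripper
	delayNextRequest bool
	m                sync.Mutex
}

func (rlt *rateLimitTransport) RoundTrip(req *http.Request) (*http.Response, error) {
	// Serialize requests so parallel Terraform operations don't trip GitHub's abuse detection.
	rlt.m.Lock()

	// Space out write operations, per GitHub's best practices.
	if rlt.delayNextRequest {
		log.Printf("[DEBUG] Sleeping %s between write operations", writeDelay)
		time.Sleep(writeDelay)
	}
	rlt.delayNextRequest = rlt.isWriteMethod(req.Method)

	resp, err := rlt.transport.RoundTrip(req)
	rlt.m.Unlock()
	if err != nil {
		return nil, err
	}

	// NOTE: CheckResponse consumes the response body, so the real implementation drains
	// and restores it first (see drainBody in the review below); omitted here for brevity.
	switch ghErr := github.CheckResponse(resp).(type) {
	case *github.AbuseRateLimitError:
		retryAfter := writeDelay // fallback when GitHub sends no Retry-After (assumption)
		if ghErr.RetryAfter != nil {
			retryAfter = *ghErr.RetryAfter
		}
		log.Printf("[DEBUG] Abuse detection triggered, retrying in %s", retryAfter)
		time.Sleep(retryAfter)
		// Retry the same request; assumes its body, if any, can be re-sent.
		return rlt.RoundTrip(req)
	case *github.RateLimitError:
		retryAfter := time.Until(ghErr.Rate.Reset.Time)
		log.Printf("[DEBUG] Rate limit reached, retrying in %s", retryAfter)
		time.Sleep(retryAfter)
		return rlt.RoundTrip(req)
	}

	return resp, nil
}

// Assumed set of write (CUD) methods.
func (rlt *rateLimitTransport) isWriteMethod(method string) bool {
	switch method {
	case http.MethodPost, http.MethodPatch, http.MethodPut, http.MethodDelete:
		return true
	}
	return false
}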

@radeksimko added the Type: Feature (New feature or request) label on Sep 5, 2018
@ghost added the size/L label on Sep 5, 2018
@appilon (Contributor) left a comment

I think there might have been 1 bug I found, and then a few gentle suggestions; however, you might inform me that every line is intentional, at which point I will approve!


// drainBody reads all of b to memory and then returns two equivalent
// ReadClosers yielding the same bytes.
func (rlt *rateLimitTransport) drainBody(b io.ReadCloser) (r1, r2 io.ReadCloser, err error) {
appilon (Contributor):

Do you foresee this method needing rlt in scope? Not a huge preference, but I would just define this as a helper drainBody as opposed to a method of *rateLimitTransport.
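
For reference, the suggested free-function form would look roughly like the unexported drainBody helper in net/http/httputil (a sketch, assuming bytes, io, io/ioutil and net/http are imported):

// Sketch of the suggested standalone helper, modeled on net/http/httputil's
// unexported drainBody; not the code in this diff.
func drainBody(b io.ReadCloser) (r1, r2 io.ReadCloser, err error) {
	if b == nil || b == http.NoBody {
		// Nothing to copy; preserve http.NoBody as-is.
		return http.NoBody, http.NoBody, nil
	}
	var buf bytes.Buffer
	if _, err = buf.ReadFrom(b); err != nil {
		return nil, b, err
	}
	if err = b.Close(); err != nil {
		return nil, b, err
	}
	return ioutil.NopCloser(&buf), ioutil.NopCloser(bytes.NewReader(buf.Bytes())), nil
}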

rlt.m.Unlock()
}

func (rlt *rateLimitTransport) isWriteMethod(method string) bool {
appilon (Contributor):

This is one where I feel more strongly about having it as just a little helper function and not a method.
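
For reference, the helper-function form would just drop the receiver (the set of write methods shown here is an assumption, not taken from the diff):

// Sketch of the suggested free function; assumes net/http is imported.
func isWriteMethod(method string) bool {
	switch method {
	case http.MethodPost, http.MethodPatch, http.MethodPut, http.MethodDelete:
		return true
	}
	return false
}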

log.Printf("[DEBUG] Sleeping %s between write operations", writeDelay)
time.Sleep(writeDelay)
}
if rlt.isWriteMethod(req.Method) {
appilon (Contributor):

nitpick suggestion:

rlt.delayNextRequest = rlt.isWriteMethod(req.Method)
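
For context, this presumably replaces a conditional of roughly the following shape in the diff (the body of the if block is an assumption):

// Conditional form, as suggested by the quoted context above (body assumed):
if rlt.isWriteMethod(req.Method) {
	rlt.delayNextRequest = true
}
// Suggested direct assignment, which also resets the flag for read-only methods:
rlt.delayNextRequest = rlt.isWriteMethod(req.Method)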

}
resp.Body = r1
ghErr := github.CheckResponse(resp)
if err != nil {
appilon (Contributor):

Need to check ghErr here, or reuse the already-defined err.

appilon (Contributor):

Looking further down, maybe this error check is not wanted at all? It seems like ghErr is handled below.

radeksimko (Contributor Author):

Good spot! 🙈
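
For context, one plausible shape of the fix (an assumption; the thread leaves open whether the check is dropped entirely) would be to inspect ghErr instead of the stray err:

resp.Body = r1
// Check the GitHub-reported error directly; the leftover `if err != nil` is dropped.
if ghErr := github.CheckResponse(resp); ghErr != nil {
	// rate-limit / abuse handling, as discussed in the PR description above
}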


// drainBody reads all of b to memory and then returns two equivalent
// ReadClosers yielding the same bytes.
func (rlt *rateLimitTransport) drainBody(b io.ReadCloser) (r1, r2 io.ReadCloser, err error) {
appilon (Contributor):

I almost always need a refresher when it comes to Go's io package (my mental model always thinks things are backwards). I have a feeling what you've written is exactly what you needed, but would io.TeeReader be of any use, passing the first reader to CheckResponse while it fills the second reader with the content? My apologies if there is some gotcha you already explored.

radeksimko (Contributor Author):

I don't see how TeeReader would be handy without more re-buffering/hacking. 🤔
It seems to do the same thing, but it requires an io.Writer to tee into, and the io.ReadCloser coming from the response body doesn't comply with that interface.

Maybe I'm missing something obvious?

return nil, err
}
resp.Body = r1
ghErr := github.CheckResponse(resp)
appilon (Contributor):

I'm seeing now that github.CheckResponse accepts the response object, making the use of TeeReader impossible. The first time I read it I thought it accepted just a Reader; in that case, TeeReader would have worked nicely:

var r2 bytes.Buffer
r1 := io.TeeReader(resp.Body, &r2)
ghErr := github.CheckResponse(r1)
resp.Body = &r2

Also, the interfaces might not have aligned: since resp.Body is a ReadCloser, you would have to wrap the buffer just like you did in drainBody anyway. Never mind me... always looking for an excuse to use the io builtins 😛

@radeksimko merged commit 263b46a into master on Sep 7, 2018
@radeksimko deleted the f-rate-limits branch on September 7, 2018 at 16:01
@lijok mentioned this pull request on Jul 2, 2021
kfcampbell pushed a commit to kfcampbell/terraform-provider-github that referenced this pull request Jul 26, 2022
…imits

Implement & use RateLimitTransport
alex-bourla-form3 added a commit to form3tech-oss/terraform-provider-githubfile that referenced this pull request Sep 1, 2023
* Leveraged existing patterns/code used for the normal terraform-provider-github package, which was first implemented in [this PR](integrations/terraform-provider-github#145)
alex-bourla-form3 added a commit to form3tech-oss/terraform-provider-codeowners that referenced this pull request Sep 5, 2023
* Leveraging existing patterns/code used for the normal terraform-provider-github package, which was first implemented in [this PR](integrations/terraform-provider-github#145)
* Upgraded GH client to v54 for compatibility