Fix retry-after value in distributed case #34

Conversation
Force-pushed from 5f8efed to 9569f77.
The test fails on Ubuntu because it can't find the instance UUID file. The instance ID is needed to write and read distributed rate limit state, so I will need to find a way to provide an instance ID to a caddytest.Tester.
The failure on Windows is a little stranger; I am thinking this is due to the path.

Making this a draft while I work through the problems with this change.
The basic approach in mholt#30 -- to sync around the timestamp of the oldest event -- is sound, but the implementation was flawed, because the spot before the cursor is not always the oldest event in the ring buffer. We now correctly compute that value while counting events in the window (which we had to do in order to sync event counts anyway). Additionally, this commit adds tests for the distributed rate limiter which simulate a peer by constructing a `ringBufferRateLimiter`, writing it out to storage, and then starting up a `caddytest.Tester` on that storage.
Force-pushed from 92dbab3 to 42ad060.
- use a semaphore
- cache loaded test config

Additionally, since the Caddy rate limiter will soon provide a useful retry-after (mholt/caddy-ratelimit#34), we now check that in this test.
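The commit above only says "use a semaphore" without showing the construction. In Go the usual idiom is a buffered channel; the sketch below is an assumption about that approach, not the test's actual code:

```go
package main

import (
	"fmt"
	"sync"
)

// sem is a counting semaphore built from a buffered channel: sends
// acquire a slot and block when the channel is full; receives release.
type sem chan struct{}

func (s sem) acquire() { s <- struct{}{} }
func (s sem) release() { <-s }

func main() {
	s := make(sem, 2) // allow at most 2 goroutines in the section at once

	var mu sync.Mutex
	var inFlight, maxInFlight int
	var wg sync.WaitGroup

	for i := 0; i < 10; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			s.acquire()
			defer s.release()

			// Track concurrency to show the bound is respected.
			mu.Lock()
			inFlight++
			if inFlight > maxInFlight {
				maxInFlight = inFlight
			}
			inFlight--
			mu.Unlock()
		}()
	}
	wg.Wait()
	fmt.Println(maxInFlight <= 2) // prints "true"
}
```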
I fixed the test failures by escaping the path on Windows and creating the instance UUID file.
This is looking quite good. Thank you! Is this ready to be merged? How's it working for you?
We've had this change in our integration tests for a week or so now, and deployed to a staging environment since late last week, and it appears to be working as intended. I think it should be OK to merge, especially since the most serious consequence of this being broken is just that clients would get an inaccurate retry-after value.
Awesome, that's great to hear! Thanks for working on this. It was something I wasn't sure of the best approach for, but I think this is a good one. For how simple and elegant it is, it gets us a long way.