Rate limit policy does not take into account changes in the config when using Redis #667
If it's acceptable to reset rate limits when the gateway restarts, we can add a timestamp or a random string to the key during initialization.
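A minimal sketch of that idea (the names are illustrative, not APIcast code): generate a suffix once, when the policy is initialized, and append it to every limiter key, so the counters effectively start from zero after each restart:

```lua
-- Per-boot seed: changes every time the gateway (and therefore the policy)
-- is initialized, so the old Redis keys are simply abandoned on restart.
local boot_seed = os.time() .. "-" .. math.random(1, 1000000)

local function limiter_key(name)
  -- e.g. "fixed_window/my_service" -> "fixed_window/my_service/1528101922-42137"
  return name .. "/" .. boot_seed
end
```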
@y-tabata I think it will depend a lot on the use case. For limits with short periods (seconds, minutes) it might be acceptable. However, 3scale supports long periods for limits (like a month, or a year). In that case, resetting the counters and the limiters is not acceptable.
Regarding the limiters, I think we need to focus on the window and count parameters.
@y-tabata you are thinking that the key is going to be known without the request. That is not going to be the case all the time. We plan to add templating to the key, so you can, for example, rate limit by the client IP. I think it is important to first define what we want, business-logic wise, and then proceed to the technical solution.

"connection"
https://github.com/openresty/lua-resty-limit-traffic/blob/ef080731b80fe00a385a4aed8a7f7f575d01c152/lib/resty/limit/count.lua#L47 uses decrements to maintain the connection limit. That means that when the limit changes, the current window would first have to expire. I understand this might be confusing and probably counterproductive.
Proposal: use increments instead of decrements to maintain the current connection count, and apply the limit after that.

"leaky bucket"
This limiter is configured with a per-second rate limit and a burst. I think it is OK to ignore this one, as its per-second granularity means a config change would take effect almost instantly.

"fixed window"
https://github.com/openresty/lua-resty-limit-traffic/blob/ef080731b80fe00a385a4aed8a7f7f575d01c152/lib/resty/limit/count.lua#L47 - it is implemented the same way as the connection rate limiter: it decrements from the limit's default value. Switching to maintaining the current value and applying the limit outside would solve the problem.

Summary
I think we should investigate the cost of changing to the increment model and see if we can push those changes upstream. If not, we could simply fork the library. Also, there is already some discussion about changing limits on the fly: openresty/lua-resty-limit-traffic#23
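A minimal sketch of the increment model proposed above, using raw Redis commands via lua-resty-redis (illustrative only, not the lua-resty-limit-traffic implementation; the function name and connection details are made up). The counter only tracks current usage and the configured limit is applied afterwards, so changing the limit in the config takes effect immediately instead of waiting for the current window to expire:

```lua
local redis = require "resty.redis"

-- Increment-model check for a fixed-window style limiter: track usage with
-- INCR and compare it against the limit passed in from the (possibly updated)
-- policy config.
local function incoming(key, limit, window)
  local red = redis:new()
  red:set_timeout(1000) -- 1s

  local ok, err = red:connect("127.0.0.1", 6379)
  if not ok then
    return nil, "failed to connect to Redis: " .. (err or "")
  end

  local count, incr_err = red:incr(key)
  if not count then
    return nil, incr_err
  end

  if count == 1 then
    -- first hit in this window: start the TTL
    red:expire(key, window)
  end

  red:set_keepalive(10000, 100)

  if count > limit then
    return nil, "rejected" -- over the currently configured limit
  end

  return count
end
```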
@mikz Thank you for your advice.
If we use the client IP address, the scope will be the client. Templating like that will be useful when we implement it.
That link is the "fixed window" one. I think "connection" already uses the increment model.
In my understanding, if a user changes the parameters of a policy (from the UI), APIcast restarts and picks up the new configuration. One thing we have to consider is that when a user changes the parameters of the "fixed window" limiter, we should reset the counter. The issue (openresty/lua-resty-limit-traffic#23) does not look active.
Client IP is just an example. With templating, the value can be any parameter from the request. That might include, for example, the parsed JWT token. We don't want to extend the policy with a dedicated scope for every kind of thing people might want to rate limit by. Service and global scope still make sense: for example, in our SaaS we won't allow people to use the global scope, so even with a templated rate limiting key the counters would still be scoped to the service. We want to allow users to rate limit by anything available in the context (any nginx variable, the parsed JWT, etc.) without us doing the work.
You are right, sorry for the confusion. Indeed, the "connection" limiter uses increments.
Unfortunately, no. We also have to consider cases where there are several gateways running with different configurations (either during rolling updates, or simply because of restarts or because they pull the config at different times).
So if a rate limit is changed from 50 to 100, it does not make sense for the app to be able to do more than 100 when that happens. If we kept only the "remaining" value and reset it, this could happen, right? I think the only sane way of making changes/upgrades work is to track the current usage and just apply the limit independently, instead of tracking the remaining amount.
I think that is a hard problem. For example, if there are 10 gateways and they start pulling configuration at different times, how would you know whether the change was applied or not? What about plain gateway restarts? Keeping state in sync is a hard problem, and from my experience it is easier to flip the problem on its head and try to eliminate it rather than fight it. I'll try to revive openresty/lua-resty-limit-traffic#23 and see if they'd be OK with switching to the increment model. IMHO that is the best path forward and would unify how the rate limiters are implemented.
I understand. So users can rate limit by "client" or "method" and so on.
I understand. So the key template will be expanded in the rewrite phase?
This is a very nice benefit, I think.
Thanks for your help.
Yes, the template expansion would be performed in the rewrite phase. As for how to template, the plan is to use Liquid - the same format 3scale uses on the developer portal and for the nginx configuration of APIcast. I plan to work on this and have a prototype by the end of this week.
I think we might not be needing that. If both the connection and fixed window rate limiters used increments, then the only other variables are the limit and the time window. I think we should encode the time window into the key itself, for example by deriving it from the UNIX timestamp. I'd really try to stick to just using increments, so we don't have to solve the hard problem of synchronizing changes across the fleet of APIcast instances.
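A minimal sketch of encoding the time window into the key, as suggested above (the key layout is hypothetical, not APIcast's): requests within the same window map to the same Redis key, a new window simply produces a new key, and the limit itself stays in the config, so nothing needs to be reset when the config changes:

```lua
-- Derive the current window number from the UNIX timestamp and embed both
-- the window size and the window number in the key.
local function window_key(base_key, window)
  local current_window = math.floor(os.time() / window)
  return base_key .. ":" .. window .. ":" .. current_window
end

-- With a 60s window, every request in the same minute hits the same key;
-- the next minute starts a fresh counter automatically:
--   window_key("service_42/fixed_window", 60)
```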
I made a simple patch, openresty/lua-resty-limit-traffic#34, to change it to increments. I'd appreciate some review scrutiny, especially regarding atomicity and race conditions.
#758 updates APIcast to use that forked version.
This reverts commit b04fbf2.
Can we close this with #703?
The rate limit policy has some limitations when configured to use Redis.
For example:
Define a "fixed window" limit for a key, with a given count and window. If the config later changes (for example, the window is increased or decreased, or the limit is removed), the keys already stored in Redis keep the old window, so the new config is not applied to them.
In order to solve these problems, we need to store more information in Redis and perform some checks every time APIcast boots with the rate limit policy config. For example, the policy needs to check the existing limits and see whether it needs to increase the TTL of the limiting keys, decrease it, remove them, etc.
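A rough sketch of the kind of check described above (illustrative only, not the actual policy code; it assumes `red` is an already-connected lua-resty-redis instance): on boot, compare the TTL of an existing limiter key with the window from the new config and shrink it if needed:

```lua
local function reconcile_key(red, key, new_window)
  local ttl, err = red:ttl(key)
  if not ttl then
    return nil, err
  end

  if ttl == -2 then
    return true -- key does not exist yet, nothing to reconcile
  elseif ttl == -1 or ttl > new_window then
    -- no expiry set, or the old window was longer: apply the new window
    return red:expire(key, new_window)
  end

  return true -- TTL already fits within the new window
end
```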