
bump flagd memory limit to 175Mi to avoid OOMs #1131

Closed

Conversation

tsloughter
Member

I was seeing flagd fail from OOMs while running the demo. I bumped it to what featureflagservice used before it, so it's at least no more memory than before, but I can play around with the numbers if you want the bare minimum for the limit.

@tsloughter tsloughter requested a review from a team April 10, 2024 12:19
@tsloughter
Member Author

Hm, I'm seeing the same for frontendproxy, which has a 50Mi limit, and frontend, which even has a 200Mi limit.

Maybe it is just me. I'm trying without the load generator and will close this if so.

@puckpuck
Contributor

We do want to try to get the bare minimum if possible.

@tsloughter
Member Author

@puckpuck ok, I may just close this, though. I always get mixed up when coming back to k8s requests/limits every couple of years, hehe. Since this only sets a limit, the request defaults to the same value, making it the least likely type of pod to be OOM-killed. So if I see it being killed until I raise its memory request/limit, it likely means, in this case, that other pods will be killed instead, because its percentage of requested usage is now lower than theirs when my system is low on memory.
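The behavior described above can be sketched as a resource block (values other than the 175Mi from this PR's title are hypothetical). When only a limit is set, Kubernetes defaults the memory request to the same value:

```yaml
# Hypothetical flagd container spec fragment: only a memory limit is set.
# Kubernetes then defaults requests.memory to the same value (175Mi),
# so the container never uses more memory than it requested, which makes
# it a less likely OOM-kill target than pods exceeding their requests.
resources:
  limits:
    memory: 175Mi
  # requests.memory is implied to be 175Mi as well
```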

@MadVikingGod

I just ran into the bug here.

I would suggest at least 50Mi. It has reached a steady state at 29M right now, but there might be a spike at startup, so having a bit of headroom would be wise.

As for Request vs. Limit: the request decides whether the pod can be scheduled, while the limit is what triggers the OOM kill. The best advice I've heard is to set only memory limits and CPU requests.
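A minimal sketch of that "memory limits and CPU requests" advice, using hypothetical values (the 50Mi matches the suggestion above, the CPU figure is illustrative):

```yaml
# Sketch only: CPU request governs scheduling; memory limit governs
# when the container is OOM-killed. No CPU limit avoids throttling,
# and setting only a memory limit also defaults the memory request
# to the same value.
resources:
  requests:
    cpu: 100m
  limits:
    memory: 50Mi
```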

@puckpuck
Contributor

Going to close this in favor of #1161

@puckpuck puckpuck closed this Apr 30, 2024