Set memory limits #921
Conversation
I'm fine with this, just one request to add a comment about where the values came from.
I suspect the limit of 1000MiB is much too large. I'll create an issue to determine normal memory usage of these processes.
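For reference, a tighter setting would normally pair a request with the limit once real usage is known. The snippet below is only an illustrative sketch with made-up numbers, not values taken from this PR:

        resources:
          requests:
            memory: 100Mi  # hypothetical steady-state usage; replace with measured numbers
          limits:
            memory: 200Mi  # hypothetical headroom above the request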
config/500-controller.yaml
@@ -52,6 +52,9 @@ spec:
         ports:
         - containerPort: 9090
           name: metrics
+        resources:
+          limits:
+            memory: 1000Mi
Can you add a comment explaining the provenance of this value? Even if it's just dart board, that gives the future maintainer context about how the value was derived and what would be required to improve on it.
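Something along these lines would satisfy that; the wording here is just a sketch, not the exact comment that ended up in the PR:

        resources:
          limits:
            # Rough guess ("dart board"); see knative/serving#3332 for the serving
            # equivalent. Revisit once actual memory usage has been measured.
            memory: 1000Mi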
Done - see if my comment is sufficient.
/lgtm
Thanks @duglin!
One more thing - can you add something like this to your release note block in the PR body?
Add 1GiB memory limit to controller and webhook deployments
[APPROVALNOTIFIER] This PR is APPROVED. This pull-request has been approved by: duglin, grantr. The full list of commands accepted by this bot can be found here. The pull request process is described here.
@grantr done - let me know if I didn't do it correctly
Looks good, thanks!
We're seeing OOM issues at the Node level when we don't set the limit on some pods. This will allow the pod to restart gracefully w/o taking down the entire node.
I'm not stuck on the specific value so we can pick a different one.
See knative/serving#3332 for the serving equivalent.
Signed-off-by: Doug Davis <dug@us.ibm.com>
Release Note
Add 1GiB memory limit to controller and webhook deployments