workaround cgroup zero memory working set problem #1598
base: master
Conversation
We observed a zero container_memory_working_set_bytes metric for some containers on nodes running cgroup v1. The same problem is not observed on nodes running cgroup v2. For example:

$ kubectl get --raw /api/v1/nodes/<cgroup_node>/proxy/metrics/resource
container_memory_working_set_bytes{container="logrotate",namespace="...",pod="dev-5cd4cc79d6-s9cll"} 0 1732247555792

$ kubectl get --raw /api/v1/nodes/<cgroup2_node>/proxy/metrics/resource
container_memory_working_set_bytes{container="logrotate",namespace="...",pod="dev-5cd4cc79d6-test"} 1.37216e+06 1732247626298

The metrics-server logs:

metrics-server-77786dd5c5-w4skb metrics-server I1121 22:02:47.705690 1 decode.go:196] "Failed getting complete container metric" containerName="logrotate" containerMetric={"StartTime":"2024-10-23T13:12:07.815984128Z","Timestamp":"2024-11-21T22:02:41.755Z","CumulativeCpuUsed":12016533431788,"MemoryUsage":0}
metrics-server-77786dd5c5-w4skb metrics-server I1121 22:02:47.706713 1 decode.go:104] "Failed getting complete Pod metric" pod=".../dev-5cd4cc79d6-s9cll"

On the cgroup v1 node:

$ kc exec -it dev-5cd4cc79d6-s9cll -c logrotate -- /bin/sh -c "cat /sys/fs/cgroup/memory/memory.usage_in_bytes; cat /sys/fs/cgroup/memory/memory.stat |grep -w total_inactive_file |cut -d' ' -f2"
212414464
214917120

On the cgroup v2 node:

$ kc exec -it dev-5cd4cc79d6-test -c logrotate -- /bin/sh -c "cat /sys/fs/cgroup/memory.current; cat /sys/fs/cgroup/memory.stat |grep -w inactive_file |cut -d' ' -f2"
212344832
210112512

Under the current logic, if one container in a pod encounters this problem, the metrics for the whole pod are dropped. This is overkill: the container with the zero working set usually accounts for only a small percentage of the pod's resource usage, yet without PodMetrics, downstream components such as the HPA cannot autoscale the Deployment/StatefulSet, which has a much larger impact on the system. The patch works around the cgroup zero memory working set problem by keeping the PodMetrics unless all the containers in the pod encounter the problem at the same time.

Signed-off-by: Zhu, Yi <chuyee@gmail.com>
The committers listed above are authorized under a signed CLA.
[APPROVALNOTIFIER] This PR is NOT APPROVED. This pull request has been approved by: chuyee. The full list of commands accepted by this bot can be found here.
Needs approval from an approver in each of these files.
Approvers can indicate their approval by writing /approve in a comment.
This issue is currently awaiting triage. If metrics-server contributors determine this is a relevant issue, they will accept it by applying the triage/accepted label. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
Welcome @chuyee!
Hi @chuyee. Thanks for your PR. I'm waiting for a kubernetes-sigs member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test. Once the patch is verified, the new status will be reflected by the ok-to-test label. I understand the commands that are listed here. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
As the output above shows, a cgroup v1 node can report a negative working set (truncated to zero) in some scenarios. Under the current metrics-server logic, if any one container in a pod hits this problem, the whole pod's metrics are discarded. That is overkill: the container with the zero working set usually accounts for only a small percentage of the pod's resource usage, while dropping the entire PodMetrics means downstream components such as the HPA can no longer autoscale the Deployment/StatefulSet, which amounts to a system-level degradation.
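For illustration, here is a minimal Go sketch of the usual cgroup v1 derivation (working set = memory.usage_in_bytes minus total_inactive_file, clamped at zero so it never goes negative); the function name and the main program are ours for demonstration, not metrics-server's or cAdvisor's actual code. Plugging in the values captured above shows why the cgroup v1 container reports 0 while the cgroup v2 container reports a sane value:

```go
package main

import "fmt"

// workingSetBytes mirrors the common cgroup v1 derivation: memory usage minus
// the inactive file cache, clamped at zero so the metric never goes negative.
// (Illustrative only; not the actual metrics-server/cAdvisor source.)
func workingSetBytes(usage, inactiveFile uint64) uint64 {
	if inactiveFile >= usage {
		return 0 // cache exceeds usage, so the working set collapses to 0
	}
	return usage - inactiveFile
}

func main() {
	// cgroup v1 node: memory.usage_in_bytes=212414464, total_inactive_file=214917120
	fmt.Println(workingSetBytes(212414464, 214917120)) // prints 0 (the problem case)

	// cgroup v2 node: memory.current=212344832, inactive_file=210112512
	fmt.Println(workingSetBytes(212344832, 210112512)) // prints 2232320
}
```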
The patch works around the cgroup zero-working-set problem by keeping the PodMetrics unless all the containers in the pod hit the problem at the same time.
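The sketch below illustrates the idea only; it is not the actual diff, and the type and function names (containerMetric, decodePodMetrics) are hypothetical. Instead of invalidating the whole PodMetrics as soon as one container reports an incomplete memory metric, the affected container is skipped and the pod is only dropped when every container is affected:

```go
package sketch

// containerMetric is a simplified stand-in for the per-container values
// metrics-server decodes from the kubelet (hypothetical type for this sketch).
type containerMetric struct {
	Name        string
	MemoryUsage uint64 // working set bytes as reported by the kubelet
}

// decodePodMetrics keeps the pod's metrics as long as at least one container
// has a usable working set. Containers reporting zero are skipped rather than
// causing the whole pod to be discarded; only when all containers report zero
// is the pod dropped (the second return value is false).
func decodePodMetrics(containers []containerMetric) ([]containerMetric, bool) {
	kept := make([]containerMetric, 0, len(containers))
	for _, c := range containers {
		if c.MemoryUsage == 0 {
			// Previous behavior: bail out here and drop the entire pod.
			continue
		}
		kept = append(kept, c)
	}
	if len(kept) == 0 {
		return nil, false // every container hit the zero-working-set problem
	}
	return kept, true
}
```

Whether affected containers are skipped or reported with zero usage is an implementation detail of the real patch; the point is that one bad container no longer invalidates the whole pod.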
What this PR does / why we need it:
workingSetBytes of 0 doesn't always indicate a terminated process
Which issue(s) this PR fixes:
Fixes #1330