Work around cgroup zero memory working set problem #1598

Open
wants to merge 1 commit into master
Conversation

@chuyee commented Nov 22, 2024

We observed a problem where container_memory_working_set_bytes metrics are reported as zero on nodes using cgroup v1. The same problem is not observed on nodes with cgroup v2. For example:

$ kubectl get --raw /api/v1/nodes/<cgroup_node>/proxy/metrics/resource
container_memory_working_set_bytes{container="logrotate", namespace="...", pod="dev-5cd4cc79d6-s9cll"} \
0 1732247555792

$ kubectl get --raw /api/v1/nodes/<cgroup2_node>/proxy/metrics/resource
container_memory_working_set_bytes{container="logrotate", namespace="...",pod="dev-5cd4cc79d6-test"} \
1.37216e+06 1732247626298

The metrics-server logs:

metrics-server-77786dd5c5-w4skb metrics-server I1121 22:02:47.705690       \
1 decode.go:196] "Failed getting complete container metric" containerName="logrotate" \
containerMetric={"StartTime":"2024-10-23T13:12:07.815984128Z","Timestamp":"2024-11-21T22:02:41.755Z", \
"CumulativeCpuUsed":12016533431788,"MemoryUsage":0}
metrics-server-77786dd5c5-w4skb metrics-server I1121 22:02:47.706713       \
1 decode.go:104] "Failed getting complete Pod metric" pod=".../dev-5cd4cc79d6-s9cll"

On the cgroup v1 node:

$ kc exec -it dev-5cd4cc79d6-s9cll -c logrotate -- /bin/sh -c \
"cat /sys/fs/cgroup/memory/memory.usage_in_bytes; \
cat /sys/fs/cgroup/memory/memory.stat |grep -w total_inactive_file |cut -d' ' -f2"
212414464
214917120

On the cgroup v2 node:

$ kc exec -it dev-5cd4cc79d6-test -c logrotate -- /bin/sh -c \
"cat /sys/fs/cgroup/memory.current; \
cat /sys/fs/cgroup/memory.stat |grep -w inactive_file |cut -d' ' -f2"
212344832
210112512

As we can see, a cgroup v1 node may compute a negative working set memory value, which is truncated to zero, in some scenarios. With the current metrics-server logic, if even one container in a pod encounters this problem, the metrics for the whole pod are discarded. That is overkill: the container with the zero working set usually accounts for a small fraction of the pod's resource usage, yet once the whole PodMetrics object is dropped, downstream components such as the HPA can no longer autoscale the corresponding Deployment/StatefulSet, which amounts to a system degradation.
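
For context, the working set is derived from the two values shown above: total memory usage minus the inactive file cache, clamped at zero when the subtraction would go negative. A minimal sketch of that derivation (an illustration of the assumed kubelet/cAdvisor behavior, not the exact cAdvisor code), using the numbers above:

package main

import "fmt"

// workingSet mirrors the assumed derivation: total usage minus inactive
// file cache, truncated to zero when the subtraction would go negative.
func workingSet(usage, inactiveFile uint64) uint64 {
	if inactiveFile > usage {
		// On cgroup v1 the inactive file counter can exceed total usage,
		// so the result is truncated to zero instead of going negative.
		return 0
	}
	return usage - inactiveFile
}

func main() {
	// cgroup v1 container above: usage_in_bytes vs. total_inactive_file.
	fmt.Println(workingSet(212414464, 214917120)) // prints 0
	// cgroup v2 container above: memory.current vs. inactive_file.
	fmt.Println(workingSet(212344832, 210112512)) // prints 2232320
}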

The patch works around the cgroup zero working set memory problem by keeping the PodMetrics unless all containers in the pod encounter the problem at the same time.
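
The idea, roughly (a sketch of the described behavior with hypothetical type and helper names, not the exact metrics-server diff): skip containers that report a zero working set and drop the PodMetrics only when every container in the pod is affected.

package decode

// ContainerMetric and PodMetrics are hypothetical stand-ins for
// metrics-server's internal structures.
type ContainerMetric struct {
	Name        string
	MemoryUsage uint64 // working set bytes
}

type PodMetrics struct {
	Name       string
	Containers []ContainerMetric
}

// decodePodMetrics keeps the pod metric as long as at least one container
// reports a non-zero working set; only when every container is affected is
// the whole PodMetrics dropped, as before.
func decodePodMetrics(pod string, containers []ContainerMetric) (*PodMetrics, bool) {
	kept := make([]ContainerMetric, 0, len(containers))
	for _, c := range containers {
		if c.MemoryUsage == 0 {
			// On cgroup v1 a zero working set can simply mean that
			// inactive_file exceeded usage, not that the process exited.
			continue
		}
		kept = append(kept, c)
	}
	if len(kept) == 0 {
		return nil, false // all containers affected: drop the pod metric
	}
	return &PodMetrics{Name: pod, Containers: kept}, true
}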

What this PR does / why we need it:
A workingSetBytes value of 0 doesn't always indicate a terminated process.

Which issue(s) this PR fixes:
Fixes #1330

Signed-off-by: Zhu, Yi <chuyee@gmail.com>

linux-foundation-easycla bot commented Nov 22, 2024

CLA Signed


The committers listed above are authorized under a signed CLA.

@k8s-ci-robot
Contributor

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: chuyee
Once this PR has been reviewed and has the lgtm label, please assign logicalhan for approval. For more information see the Kubernetes Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@k8s-ci-robot added the cncf-cla: no (Indicates the PR's author has not signed the CNCF CLA.) label Nov 22, 2024
@k8s-ci-robot
Contributor

This issue is currently awaiting triage.

If metrics-server contributors determine this is a relevant issue, they will accept it by applying the triage/accepted label and provide further guidance.

The triage/accepted label can be added by org members by writing /triage accepted in a comment.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@k8s-ci-robot added the needs-triage (Indicates an issue or PR lacks a `triage/foo` label and requires one.) label Nov 22, 2024
@k8s-ci-robot
Contributor

Welcome @chuyee!

It looks like this is your first PR to kubernetes-sigs/metrics-server 🎉. Please refer to our pull request process documentation to help your PR have a smooth ride to approval.

You will be prompted by a bot to use commands during the review process. Do not be afraid to follow the prompts! It is okay to experiment. Here is the bot commands documentation.

You can also check if kubernetes-sigs/metrics-server has its own contribution guidelines.

You may want to refer to our testing guide if you run into trouble with your tests not passing.

If you are having difficulty getting your pull request seen, please follow the recommended escalation practices. Also, for tips and tricks in the contribution process you may want to read the Kubernetes contributor cheat sheet. We want to make sure your contribution gets all the attention it needs!

Thank you, and welcome to Kubernetes. 😃

@k8s-ci-robot added the needs-ok-to-test (Indicates a PR that requires an org member to verify it is safe to test.) label Nov 22, 2024
@k8s-ci-robot
Contributor

Hi @chuyee. Thanks for your PR.

I'm waiting for a kubernetes-sigs member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Until that is done, I will not automatically test new commits in this PR, but the usual testing commands by org members will still work. Regular contributors should join the org to skip this step.

Once the patch is verified, the new status will be reflected by the ok-to-test label.

I understand the commands that are listed here.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@k8s-ci-robot added the size/XS (Denotes a PR that changes 0-9 lines, ignoring generated files.) and cncf-cla: yes (Indicates the PR's author has signed the CNCF CLA.) labels and removed the cncf-cla: no (Indicates the PR's author has not signed the CNCF CLA.) label Nov 22, 2024
Successfully merging this pull request may close these issues.

Is workingSetBytes of 0 really an indication of a terminated process?