
Set LimitNOFILE to 1048576 instead of infinity #16329

Conversation

@dims (Member) commented Feb 8, 2024

When we set LimitNOFILE to infinity, things break; for example, the NFS-based tests in the Kubernetes test harness:
https://testgrid.k8s.io/amazon-ec2-al2023#ci-kubernetes-e2e-al2023-aws-canary&width=20

Also see containerd/containerd#9660, which points to more things that are likely to break with infinity: containerd/containerd#8924 (comment)
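
In practice the change amounts to a single directive in the containerd unit that nodeup renders. A sketch of the intent (the drop-in path and file name here are illustrative, not the actual diff):

```ini
# /etc/systemd/system/containerd.service.d/99-limitnofile.conf (illustrative)
[Service]
# A fixed 2^20 ceiling. LimitNOFILE=infinity on newer systemd resolves to
# fs.nr_open (1073741816 by default since systemd v240), which breaks
# software that iterates over every possible file descriptor.
LimitNOFILE=1048576
```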

Signed-off-by: Davanum Srinivas <davanum@gmail.com>
@k8s-ci-robot added the cncf-cla: yes label (indicates the PR's author has signed the CNCF CLA) on Feb 8, 2024
@k8s-ci-robot requested review from hakman and zetaab on Feb 8, 2024
@k8s-ci-robot added the area/nodeup and size/XS labels (size/XS denotes a PR that changes 0-9 lines, ignoring generated files) on Feb 8, 2024
@zetaab (Member) left a comment


/lgtm

infinity should be the same as 1048576

@k8s-ci-robot added the lgtm label ("Looks good to me"; indicates that a PR is ready to be merged) on Feb 8, 2024
@k8s-ci-robot (Contributor) commented

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: zetaab

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@k8s-ci-robot added the approved label (indicates a PR has been approved by an approver from all required OWNERS files) on Feb 8, 2024
@k8s-ci-robot merged commit cdf65e4 into kubernetes:master on Feb 8, 2024
21 checks passed
@k8s-ci-robot added this to the v1.29 milestone on Feb 8, 2024
@dims (Member, Author) commented Feb 8, 2024

@zetaab (Member) commented Feb 8, 2024

I executed the Prometheus query max(max_over_time(process_open_fds[2w])) against all of our clusters, and at least in our case 1048576 is going to be enough. However, I am not 100% sure whether this holds for all kOps users. Our current maximum is ~44k.
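
Broken out, that query computes the cluster-wide two-week peak of per-process open file descriptors (my reading of the expression, for anyone reproducing it):

```promql
max(                      # single cluster-wide peak across all series
  max_over_time(
    process_open_fds[2w]  # per-process open-FD gauge, 2-week range
  )                       # per-series maximum over that window
)
```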

@polarathene commented
> However, I am not 100% sure whether this holds for all kOps users.

You honestly shouldn't need to do this if the affected software properly raised its soft limit at runtime.

  • The default soft limit should be 1024 for compatibility. It's per process, so the FD count is not cumulative.
  • A high hard limit is fine; it only defines the ceiling to which a soft limit may be raised.
  • I know some enterprise-grade deployments have workloads that exceed the 2^20 (1048576) limit, but if software that needs this much relies on the environment raising the soft limit for it, it negatively impacts other software in the container that regresses under a high soft limit.

I encourage you to identify the software that hits those issues and push for a fix on their end. It takes only a couple of lines to raise the soft limit to the hard limit (sketched below), so it's not a difficult request to upstream. Why projects like Envoy are refusing to do this is beyond me.
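
For concreteness, a minimal sketch of those couple of lines in Go (the language containerd and kOps are written in); this is illustrative, not code from any project mentioned here:

```go
package main

import (
	"fmt"
	"syscall"
)

func main() {
	var rl syscall.Rlimit
	// Read the current RLIMIT_NOFILE soft (Cur) and hard (Max) limits.
	if err := syscall.Getrlimit(syscall.RLIMIT_NOFILE, &rl); err != nil {
		panic(err)
	}
	// Raise the soft limit to the hard limit. Raising Cur up to Max
	// needs no privileges; only exceeding Max does.
	rl.Cur = rl.Max
	if err := syscall.Setrlimit(syscall.RLIMIT_NOFILE, &rl); err != nil {
		panic(err)
	}
	fmt.Printf("soft limit raised to %d\n", rl.Cur)
}
```

Go 1.19 and later even do this automatically at startup for programs that import os, which is part of why this is a cheap request to upstream.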
