
Support passing of Ulimits as -1 to mean max #19879

Merged: 1 commit into containers:main, Nov 10, 2023

Conversation

@rhatdan (Member) commented Sep 6, 2023

Docker allows the passing of -1 to indicate the maximum limit allowed for the current process.

Fixes: #19319

Does this PR introduce a user-facing change?

Ulimits set to -1 set the container process's limits to the current max (unlimited)
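To make the release note concrete, here is a minimal sketch in Go of what the -1 translation amounts to. It is illustrative only, not the actual podman code path; resolveUlimit is a hypothetical helper built on golang.org/x/sys/unix:

```go
package main

import (
	"fmt"

	"golang.org/x/sys/unix"
)

// resolveUlimit maps the -1 sentinel onto the maximum limit currently
// allowed for this process (hypothetical helper, for illustration only).
func resolveUlimit(resource int, requested int64) (int64, error) {
	if requested != -1 {
		return requested, nil
	}
	var rl unix.Rlimit
	if err := unix.Getrlimit(resource, &rl); err != nil {
		return 0, err
	}
	// rl.Max is the hard limit: the most an unprivileged process may set.
	// For root it may itself be RLIM_INFINITY.
	return int64(rl.Max), nil
}

func main() {
	nofile, err := resolveUlimit(unix.RLIMIT_NOFILE, -1)
	if err != nil {
		panic(err)
	}
	fmt.Printf("nofile=-1 resolves to %d\n", nofile)
}
```

On the CLI this corresponds to requests such as --ulimit memlock=-1:-1, the case reported in #19319.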

openshift-ci bot added the release-note and approved labels Sep 6, 2023
@rhatdan force-pushed the ulimits branch 2 times, most recently from 5a1bad0 to 1035f36 on September 7, 2023 13:45
@mheon (Member) commented Sep 7, 2023

@Luap99 Does this risk the kernel-change break that we saw earlier?

@TomSweeneyRedHat (Member)

@rhatdan it looks like you have changes to label functionality in here too. Was that intentional? If so, a note in the description would be appreciated.

Lots of red tests here too; do we have something going on?

@rhatdan (Member, Author) commented Sep 8, 2023

I updated the Buildah vendor. I will split them apart, since I have to take another pass.

@TomSweeneyRedHat (Member)

@rhatdan looks like you have a dirty tree with that vendor....

@Luap99 (Member) left a review comment:

Yes, we cannot ever freeze limits at container creation; this will reintroduce the same bug. You need to set the limit at container start time, see #18721.

But then again, why would we set anything at all if unlimited is requested as rootless and we treat it as "as much as we can get"? In this case we do not have to do anything, as the limits will just naturally be inherited by the child processes.

If unlimited should really mean max, then it should only do so as rootless. As root we can set unlimited; otherwise we may break backwards and Docker compatibility.

I find it rather awkward to just silently assume unlimited == the max you can get. If someone explicitly requests unlimited, they may want it for a reason. Changing the definition of what unlimited means for resource limits seems like a bad idea and IMO breaks POLA (the principle of least astonishment).
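A rough sketch of the split suggested in this review, with a hypothetical function name (podman's real rootless detection lives in its pkg/rootless package): only the rootless path translates -1, while root keeps true unlimited for Docker compatibility.

```go
package sketch

import "golang.org/x/sys/unix"

// translateUlimit illustrates the proposal above (hypothetical name,
// not podman's API): as root, -1 passes through as true "unlimited"
// (RLIM_INFINITY), preserving Docker compatibility; as rootless, -1
// becomes the hard limit this process can actually reach unprivileged.
func translateUlimit(resource int, requested int64, isRootless bool) (int64, error) {
	if requested != -1 || !isRootless {
		return requested, nil // root keeps the literal request
	}
	var rl unix.Rlimit
	if err := unix.Getrlimit(resource, &rl); err != nil {
		return 0, err
	}
	return int64(rl.Max), nil // "as much as we can get" for rootless
}
```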

Comment on lines 491 to 492
Expect(ulimit.Soft).To(BeNumerically("==", limit.Cur))
Expect(ulimit.Hard).To(BeNumerically("==", limit.Max))
@Luap99 (Member) commented on the diff:

This is a major regression; unlimited must mean unlimited. Everything else just breaks user assumptions.

@rhatdan (Member, Author) replied:

Unlimited does not mean unlimited in rootful mode either, since everything has a limit. The reason for this option is to allow users to raise the current (soft) limit without affecting the max. Eliminating this ability means this functionality is not available to users.

@rhatdan (Member, Author) added:

Let's discuss at tomorrow's standup.

openshift-merge-robot added the needs-rebase label Sep 16, 2023
@TomSweeneyRedHat (Member)

@rhatdan getting closer. Now you have branch conflicts.

openshift-merge-robot removed the needs-rebase label Oct 12, 2023
@packit-as-a-service

Cockpit tests failed for commit 17aecd248e958380fe99cbb38b7cec8cb697bcfc. @martinpitt, @jelly, @mvollmer please check.

@rhatdan force-pushed the ulimits branch 4 times, most recently from 9edf85e to c9308eb on October 18, 2023 22:10
@TomSweeneyRedHat (Member)

@rhatdan any further luck with this? Lots of unhappy tests still.

@rhatdan (Member, Author) commented Oct 25, 2023

Hope to get back to it soon.

@Luap99 (Member) commented Oct 26, 2023

Yes, we cannot ever freeze limits at container creation; this will reintroduce the same bug. You need to set the limit at container start time, see #18721.

Please take a look at this comment; we cannot ever freeze limits like this at create time without causing trouble down the road.

@rhatdan (Member, Author) commented Oct 27, 2023

I disagree; in the previous breakage we were doing this without user input. Now the user must specify the -1 value, which is documented to set the current max on the container. We make no guarantee that this number will not change in the future.

Besides, the kernel change could and should have been considered a breaking change, since it broke things by changing defaults.

Currently rootless podman is broken when given a -1 value, and this can happen with docker-compose. We need to allow this and make sure users understand what they asked for.

Perhaps in podman 5.* with boltdb, podman update container could even change this field.

@Luap99 (Member) commented Oct 30, 2023

I am not saying you cannot translate -1 to max in the rootless case, but when you do that, it must happen at container start time, not at create time as you are doing right now; i.e., it should be in generateSpec() in libpod.

Limits are never static; they can change due to configuration changes or hardware changes. If I lower my system memory, then with this patch the container would no longer start and would become useless.
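A sketch of the ordering being asked for here, with hypothetical type and function names (per the comment, the real hook would be generateSpec() in libpod): the container config stores the user's -1 verbatim at create time, and it is only resolved against the host when the spec is generated at start.

```go
package sketch

import "golang.org/x/sys/unix"

// storedUlimit is a hypothetical stand-in for what the container
// config persists at create time: the raw user request, -1 included.
type storedUlimit struct {
	Name string // e.g. "memlock" or "nofile"
	Soft int64  // -1 means "max currently available"
	Hard int64
}

// resolveAtStart runs when the OCI spec is generated at container
// start (cf. generateSpec() in libpod), never at create time. The host
// limit is re-read on every start, so later configuration or hardware
// changes are picked up instead of being frozen into the container.
func resolveAtStart(resource int, l storedUlimit) (soft, hard uint64, err error) {
	var rl unix.Rlimit
	if err := unix.Getrlimit(resource, &rl); err != nil {
		return 0, 0, err
	}
	soft, hard = uint64(l.Soft), uint64(l.Hard)
	if l.Soft == -1 {
		soft = rl.Max
	}
	if l.Hard == -1 {
		hard = rl.Max
	}
	return soft, hard, nil
}
```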

The pushed commit message:

Docker allows the passing of -1 to indicate the maximum limit allowed for the current process.

Fixes: containers#19319

Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
@rhatdan (Member, Author) commented Nov 2, 2023

The limits are being stored as -1 now and should be applied at start time, based on my examination of the underlying data structure and the output of podman inspect.

@Luap99 (Member) left a review comment:

Thank you, LGTM

openshift-ci bot (Contributor) commented Nov 2, 2023

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: Luap99, rhatdan

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@TomSweeneyRedHat (Member)

LGTM, but the boltdb test is barfing; I'm not sure if that's a flake or not.

@rhatdan added the lgtm label Nov 10, 2023
openshift-merge-bot merged commit 7d107b9 into containers:main Nov 10, 2023
100 checks passed
github-actions bot added the locked - please file new issue/PR label Feb 9, 2024
github-actions bot locked as resolved and limited conversation to collaborators Feb 9, 2024
Successfully merging this pull request closes:

Issue with ulimits memlock -1:-1 (#19319)