[1.1] join the cgroup after the initial setup finished #4439

Open · wants to merge 2 commits into base: release-1.1

11 changes: 7 additions & 4 deletions libcontainer/process_linux.go
@@ -407,6 +407,13 @@ func (p *initProcess) start() (retErr error) {
}
}()

+ // We should join the cgroup after the initial setup is finished,
+ // but before runc init clones new child processes. (#4427)
+ err = <-waitInit
Contributor commented:

A noob question: Will this have any performance impact, since the cgroup join and runc init are no longer run in parallel?

@lifubang (Member, Author) commented on Oct 11, 2024:

Good question. I ran a test starting 100 containers:
runc-1.1.15: 3.025s
With this patch: 4.123s

So we need finer-grained synchronization between runc init and the main process.
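
For reference, a rough sketch of how such a timing test could be reproduced. This is an assumption about the methodology rather than the exact script used; it presumes runc is on PATH and that ./bundle is a prepared bundle whose process exits immediately, and the container names are made up:

```go
// bench.go: time how long it takes to run 100 short-lived containers.
// Hypothetical reproduction sketch; bundle path and container names are assumptions.
package main

import (
	"fmt"
	"log"
	"os/exec"
	"time"
)

func main() {
	start := time.Now()
	for i := 0; i < 100; i++ {
		// "runc run" creates, starts and waits for the container in one step.
		cmd := exec.Command("runc", "run", "--bundle", "./bundle", fmt.Sprintf("bench-%d", i))
		if err := cmd.Run(); err != nil {
			log.Fatalf("container bench-%d failed: %v", i, err)
		}
	}
	fmt.Printf("started 100 containers in %v\n", time.Since(start))
}
```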

Contributor commented:

> Good question. I ran a test starting 100 containers:
> runc-1.1.15: 3.025s
> With this patch: 4.123s

Is this degradation within acceptable limits?

@lifubang (Member, Author) commented:

@opencontainers/runc-maintainers PTAL

@rata (Member) commented on Oct 14, 2024:

Yes, I don't see why it wouldn't be. An extra ~10 ms to start a container seems acceptable; I wouldn't be surprised if there were more noise from the Go GC or other code changes. Am I missing something?
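
For reference, the per-container overhead implied by the timing above is roughly (4.123 s − 3.025 s) / 100 ≈ 11 ms.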

+ if err != nil {
+ return err
+ }
+
// Do this before syncing with child so that no children can escape the
// cgroup. We don't need to worry about not doing this and not being root
// because we'd be using the rootless cgroup manager in that case.
@@ -421,10 +428,6 @@ func (p *initProcess) start() (retErr error) {
if _, err := io.Copy(p.messageSockPair.parent, p.bootstrapData); err != nil {
return fmt.Errorf("can't copy bootstrap data to pipe: %w", err)
}
- err = <-waitInit
- if err != nil {
- return err
- }

childPid, err := p.getChildPid()
if err != nil {
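
To summarize the ordering this backport establishes in the parent process, here is a minimal sketch; the helper names below are hypothetical, and the real logic lives in (*initProcess).start() in libcontainer/process_linux.go:

```go
// Sketch of the parent-side ordering after this patch (hypothetical helpers).
package sketch

func startOrderingSketch(waitInit <-chan error, joinCgroup, sendBootstrapData func() error) error {
	// 1. Wait for "runc init" to finish its initial setup (#4427).
	if err := <-waitInit; err != nil {
		return err
	}
	// 2. Join the cgroup now, before runc init clones any new child processes,
	//    so that none of them can escape the cgroup.
	if err := joinCgroup(); err != nil {
		return err
	}
	// 3. Only then send the bootstrap data that lets runc init continue.
	//    Before this patch the cgroup join overlapped with the initial setup,
	//    which is why serializing them adds a little startup time per container.
	return sendBootstrapData()
}
```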
10 changes: 5 additions & 5 deletions tests/integration/cgroups.bats
@@ -259,7 +259,7 @@ convert_hugetlb_size() {
"memory.min": "131072",
"memory.low": "524288",
"memory.high": "5242880",
"memory.max": "20484096",
"memory.max": "10485760",
"memory.swap.max": "20971520",
"pids.max": "99",
"cpu.max": "10000 100000",
@@ -276,15 +276,15 @@ convert_hugetlb_size() {
echo "$output" | grep -q '^memory.min:131072$'
echo "$output" | grep -q '^memory.low:524288$'
echo "$output" | grep -q '^memory.high:5242880$'
echo "$output" | grep -q '^memory.max:20484096$'
echo "$output" | grep -q '^memory.max:10485760$'
echo "$output" | grep -q '^memory.swap.max:20971520$'
echo "$output" | grep -q '^pids.max:99$'
echo "$output" | grep -q '^cpu.max:10000 100000$'

check_systemd_value "MemoryMin" 131072
check_systemd_value "MemoryLow" 524288
check_systemd_value "MemoryHigh" 5242880
- check_systemd_value "MemoryMax" 20484096
+ check_systemd_value "MemoryMax" 10485760
check_systemd_value "MemorySwapMax" 20971520
check_systemd_value "TasksMax" 99
check_cpu_quota 10000 100000 "100ms"
@@ -304,7 +304,7 @@ convert_hugetlb_size() {
}
| .linux.resources.unified |= {
"memory.min": "131072",
"memory.max": "40484864",
"memory.max": "10485760",
"pids.max": "42",
"cpu.max": "5000 50000",
"cpu.weight": "42"
@@ -319,7 +319,7 @@ convert_hugetlb_size() {

runc exec test_cgroups_unified cat /sys/fs/cgroup/memory.max
[ "$status" -eq 0 ]
[ "$output" = '40484864' ]
[ "$output" = '10485760' ]

runc exec test_cgroups_unified cat /sys/fs/cgroup/pids.max
[ "$status" -eq 0 ]