# feat: add a function to get number of logical cpu cores #4879
Comments
Does `std::thread::hardware_concurrency()` work?
Normally, it works well. I am still investigating whether it works in Docker and in cgroup subsystems.
```cpp
#include <iostream>
#include <thread>

using namespace std;

int main()
{
    cout << thread::hardware_concurrency() << endl;
}
```

Test with Docker:

```
$ docker run -it --rm --cpus=2 docker-test /bin/bash
# clang++ main.cc -o main && ./main
# output: 40
```

Even though the container is limited to 2 CPUs, the output is 40 (the host's logical core count), so `thread::hardware_concurrency()` does not respect the cgroup limit.
## Get Number of Logical CPU Cores

### Ways to limit CPU usage by using cgroups
#### cgroup v1
First, we can obtain information about the current process's control groups:

```
$ cat /proc/self/cgroup
11:cpuset:/
10:hugetlb:/
9:blkio:/user.slice
8:memory:/user.slice
7:freezer:/
6:net_prio,net_cls:/
5:devices:/user.slice
4:pids:/user.slice
3:cpuacct,cpu:/user.slice
2:perf_event:/
1:name=systemd:/user.slice/user-1013.slice/session-32587.scope
```

Each line has the format `hierarchy-ID:controller-list:cgroup-path`; the third field is the group's path within that controller's hierarchy.
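As a minimal sketch (not part of the original comment), the group path for a given v1 controller can be pulled out of `/proc/self/cgroup`; the helper name `cgroupPathFor` is hypothetical:

```cpp
#include <fstream>
#include <sstream>
#include <string>

// Hypothetical helper: find the cgroup path for a given v1 controller
// (e.g. "cpu" or "cpuset") by scanning /proc/self/cgroup, whose lines
// have the format "hierarchy-ID:controller-list:path".
std::string cgroupPathFor(const std::string & controller)
{
    std::ifstream in("/proc/self/cgroup");
    std::string line;
    while (std::getline(in, line))
    {
        auto first = line.find(':');
        if (first == std::string::npos)
            continue;
        auto second = line.find(':', first + 1);
        if (second == std::string::npos)
            continue;
        // The controller list is comma-separated, e.g. "cpuacct,cpu".
        std::stringstream controllers(line.substr(first + 1, second - first - 1));
        std::string name;
        while (std::getline(controllers, name, ','))
            if (name == controller)
                return line.substr(second + 1);
    }
    return ""; // controller not found
}
```

For the listing above, `cgroupPathFor("cpu")` would return `/user.slice`.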
We can obtain the `cpuset` information of a group by:

```
$ cat /sys/fs/cgroup/cpuset/{$group_name}/cpuset.cpus
0-2,16
```

which means CPUs 0/1/2/16 are available.
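A cpuset list like `0-2,16` can be turned into a core count with a small parser; this sketch (hypothetical helper `countCpusInList`) assumes well-formed input:

```cpp
#include <sstream>
#include <string>

// Hypothetical helper: count the CPUs in a cpuset list such as "0-2,16".
// Ranges are inclusive, so "0-2,16" covers CPUs 0, 1, 2 and 16.
unsigned countCpusInList(const std::string & list)
{
    unsigned count = 0;
    std::stringstream ss(list);
    std::string item;
    while (std::getline(ss, item, ','))
    {
        auto dash = item.find('-');
        if (dash == std::string::npos)
            count += 1; // single CPU, e.g. "16"
        else
        {
            // inclusive range, e.g. "0-2"
            unsigned lo = std::stoul(item.substr(0, dash));
            unsigned hi = std::stoul(item.substr(dash + 1));
            count += hi - lo + 1;
        }
    }
    return count;
}
```

With the output above, `countCpusInList("0-2,16")` returns 4.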
The `cpu` subsystem limits CPU bandwidth; we only consider CFS here. We can obtain the CFS settings of a group by:

```
$ cat /sys/fs/cgroup/cpu/{$group_name}/cpu.cfs_period_us
100000
$ cat /sys/fs/cgroup/cpu/{$group_name}/cpu.cfs_quota_us
200000
```

which means the cgroup may use up to 200000 µs of CPU time every 100000 µs, i.e. the equivalent of 2 logical CPUs (quota / period).
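A sketch of deriving the limit from these two files, assuming the v1 layout above (hypothetical helper `cfsCpuLimitV1`; `cpu.cfs_quota_us` is `-1` when no quota is set):

```cpp
#include <cmath>
#include <fstream>
#include <string>

// Hypothetical helper: derive a CPU limit from cgroup v1 CFS settings.
// Returns 0 if the files are unreadable or no quota is set (quota == -1).
unsigned cfsCpuLimitV1(const std::string & group)
{
    std::ifstream quotaFile("/sys/fs/cgroup/cpu" + group + "/cpu.cfs_quota_us");
    std::ifstream periodFile("/sys/fs/cgroup/cpu" + group + "/cpu.cfs_period_us");
    long long quota = 0, period = 0;
    if (!(quotaFile >> quota) || !(periodFile >> period))
        return 0;
    if (quota <= 0 || period <= 0)
        return 0; // quota == -1 means "no limit"
    // e.g. quota=200000, period=100000 -> 2 logical CPUs
    return static_cast<unsigned>(std::ceil(double(quota) / double(period)));
}
```

With the values above it returns ceil(200000 / 100000) = 2.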
#### cgroup v2

cgroup v2 simplifies v1: all controllers are mounted in a single unified hierarchy. For the cpuset controller:

```
$ cat /sys/fs/cgroup/{$group_name}/cpuset.cpus
0-1,6,8-10
```

which means CPUs 0/1/6/8/9/10 are available. For the cpu controller:

```
$ cat /sys/fs/cgroup/{$group_name}/cpu.max
$MAX $PERIOD
```

which indicates that the group may consume up to $MAX µs of CPU time in each $PERIOD µs duration. There are other limit mechanisms as well, which we do not cover here.
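A corresponding sketch for v2 (hypothetical helper `cpuLimitV2`); the first field of `cpu.max` is the literal string `max` when the group is unlimited:

```cpp
#include <cmath>
#include <fstream>
#include <string>

// Hypothetical helper: derive a CPU limit from cgroup v2 cpu.max,
// whose content is "$MAX $PERIOD", e.g. "200000 100000", or "max 100000"
// when the group is unlimited. Returns 0 if unlimited or unreadable.
unsigned cpuLimitV2(const std::string & group)
{
    std::ifstream in("/sys/fs/cgroup" + group + "/cpu.max");
    std::string maxStr;
    long long period = 0;
    if (!(in >> maxStr >> period) || maxStr == "max" || period <= 0)
        return 0;
    long long quota = std::stoll(maxStr);
    return static_cast<unsigned>(std::ceil(double(quota) / double(period)));
}
```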
### Docker

Docker uses cgroups to implement resource isolation. There are three ways to limit the CPU usage of a container: `--cpus`, `--cpu-period`/`--cpu-quota`, and `--cpuset-cpus`.

Essentially, inside a Docker container:

```
$ cat /proc/self/cgroup
11:cpuset:/docker/5aeb23b2170fd1681448df3729213f4727507b0c49760bd325d885cda1ab4740
10:hugetlb:/docker/5aeb23b2170fd1681448df3729213f4727507b0c49760bd325d885cda1ab4740
9:blkio:/docker/5aeb23b2170fd1681448df3729213f4727507b0c49760bd325d885cda1ab4740
8:memory:/docker/5aeb23b2170fd1681448df3729213f4727507b0c49760bd325d885cda1ab4740
7:freezer:/docker/5aeb23b2170fd1681448df3729213f4727507b0c49760bd325d885cda1ab4740
6:net_prio,net_cls:/docker/5aeb23b2170fd1681448df3729213f4727507b0c49760bd325d885cda1ab4740
5:devices:/docker/5aeb23b2170fd1681448df3729213f4727507b0c49760bd325d885cda1ab4740
4:pids:/docker/5aeb23b2170fd1681448df3729213f4727507b0c49760bd325d885cda1ab4740
3:cpuacct,cpu:/docker/5aeb23b2170fd1681448df3729213f4727507b0c49760bd325d885cda1ab4740
2:perf_event:/docker/5aeb23b2170fd1681448df3729213f4727507b0c49760bd325d885cda1ab4740
1:name=systemd:/docker/5aeb23b2170fd1681448df3729213f4727507b0c49760bd325d885cda1ab4740
```

which suggests that we are under a cgroup named `/docker/5aeb23b2...`. However, listing the `cpu` subsystem shows no such directory:

```
$ ls /sys/fs/cgroup/cpu
cgroup.clone_children cgroup.procs cpu.cfs_quota_us cpu.rt_runtime_us cpu.stat cpuacct.usage notify_on_release
cgroup.event_control cpu.cfs_period_us cpu.rt_period_us cpu.shares cpuacct.stat cpuacct.usage_percpu tasks
```

There is no `docker/...` cgroup under the subsystems: inside the container, the cgroup filesystem is mounted at the container's own group, so its limits appear in the top-level files. The `--cpus` flag is implemented via CFS quota:

```
$ docker run -it --rm --cpus=21 tiflash-llvm-base:amd64 /bin/bash
# cat /sys/fs/cgroup/cpu/cpu.cfs_quota_us
2100000
# cat /sys/fs/cgroup/cpu/cpu.cfs_period_us
100000
```

2100000 / 100000 = 21, matching `--cpus=21`. The `--cpuset-cpus` flag is implemented via the cpuset controller:

```
$ docker run -it --rm --cpuset-cpus="0-1,4-6" tiflash-llvm-base:amd64 /bin/bash
# cat /sys/fs/cgroup/cpuset/cpuset.cpus
0-1,4-6
```
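Putting the pieces together, one possible shape for the requested function, reusing the hypothetical helpers sketched above. This is only a sketch under the assumptions stated earlier, not TiFlash's actual implementation:

```cpp
#include <algorithm>
#include <fstream>
#include <string>
#include <thread>

// Hypothetical helpers sketched earlier in this thread.
std::string cgroupPathFor(const std::string & controller);
unsigned countCpusInList(const std::string & list);
unsigned cfsCpuLimitV1(const std::string & group);
unsigned cpuLimitV2(const std::string & group);

static std::string readFirstLine(const std::string & path)
{
    std::ifstream in(path);
    std::string line;
    std::getline(in, line);
    return line;
}

// Hypothetical sketch of the requested function: start from the hardware
// value and clamp it by every cgroup limit that is actually set (0 = none).
unsigned getNumberOfLogicalCPUCores()
{
    unsigned cores = std::thread::hardware_concurrency();
    auto clamp = [&](unsigned limit)
    {
        if (limit > 0)
            cores = std::min(cores, limit);
    };

    // cgroup v1: cpuset list such as "0-2,16", plus CFS quota / period.
    // Note from the thread: inside Docker the mounted hierarchy points
    // directly at the container's group, so a real implementation would
    // also fall back to the top-level files when the named group directory
    // does not exist.
    std::string cpusetGroup = cgroupPathFor("cpuset");
    if (!cpusetGroup.empty())
        clamp(countCpusInList(readFirstLine(
            "/sys/fs/cgroup/cpuset" + cpusetGroup + "/cpuset.cpus")));
    std::string cpuGroup = cgroupPathFor("cpu");
    if (!cpuGroup.empty())
        clamp(cfsCpuLimitV1(cpuGroup));

    // cgroup v2: the unified hierarchy appears as a single "0::<path>" line
    // in /proc/self/cgroup.
    std::ifstream sc("/proc/self/cgroup");
    std::string line;
    while (std::getline(sc, line))
        if (line.rfind("0::", 0) == 0)
        {
            std::string group = line.substr(3);
            clamp(cpuLimitV2(group));
            clamp(countCpusInList(readFirstLine(
                "/sys/fs/cgroup" + group + "/cpuset.cpus")));
        }

    return cores;
}
```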
## Feature Request

**Is your feature request related to a problem? Please describe:**
Now we can only get the number of physical CPU cores via `dbms/src/Common/getNumberOfPhysicalCPUCores.cpp`; `getNumberOfPhysicalCPUCores()` will return the number of logical CPU cores in some cases, which is misleading.

**Describe the feature you'd like:**
Add a function to get the number of logical CPU cores that supports different environments such as Docker/K8s and cgroup subsystems.
**Describe alternatives you've considered:**

**Teachability, Documentation, Adoption, Migration Strategy:**