
minikube start on CRI-O fails since 1.23.1 #12509

Closed
dilyanpalauzov opened this issue Sep 18, 2021 · 8 comments · Fixed by #12533
Labels: co/kubelet (Kubelet config issues) · co/runtime/crio (CRIO related issues) · kind/regression (Categorizes issue or PR as related to a regression from a prior release.) · priority/important-soon (Must be staffed and worked on either currently, or very soon, ideally in time for the next release.)
Comments

@dilyanpalauzov

After upgrading minikube 1.23 → 1.23.1, minikube start initially downloads what it needs, but then fails to start.

$ minikube config view 
- container-runtime: cri-o
- driver: podman
$ minikube version 
minikube version: v1.23.1
commit: 84d52cd81015effbdd40c632d9de13db91d48d43

$ minikube start
😄  minikube v1.23.1 on Fedora 34                                                                                                                             
✨  Using the podman driver based on user configuration                                                                                                       
👍  Starting control plane node minikube in cluster minikube                                                                                                  
🚜  Pulling base image ...                                                                                                                                    
E0918 13:15:51.538361    4012 cache.go:201] Error downloading kic artifacts:  not yet implemented, see issue #8426                                            
🔥  Creating podman container (CPUs=2, Memory=2200MB) ...                                                                                                     
🎁  Preparing Kubernetes v1.22.1 on CRI-O 1.22.0 ...                                                                                                          
    ▪ Generating certificates and keys ...                                                                                                                    
    ▪ Booting up control plane ...                                                                                                                            
💢  initialization failed, will try again: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.1:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:                                                                                                                                                       
[init] Using Kubernetes version: v1.22.1                                                                                                                      
[preflight] Running pre-flight checks                                                                                                                         
[preflight] Pulling images required for setting up a Kubernetes cluster                                                                                       
[preflight] This might take a minute or two, depending on the speed of your internet connection                                                               
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'                                                                 
[certs] Using certificateDir folder "/var/lib/minikube/certs"                                                                                                 
[certs] Using existing ca certificate authority                                                                                                               
[certs] Using existing apiserver certificate and key on disk                                                                                                  
[certs] Generating "apiserver-kubelet-client" certificate and key                                                                                             
[certs] Generating "front-proxy-ca" certificate and key                                                                                                       
[certs] Generating "front-proxy-client" certificate and key                                                                                                   
[certs] Generating "etcd/ca" certificate and key                                                                                                              
[certs] Generating "etcd/server" certificate and key                                                                                                          
[certs] etcd/server serving cert is signed for DNS names [localhost minikube] and IPs [192.168.49.2 127.0.0.1 ::1]                                            
[certs] Generating "etcd/peer" certificate and key                                                                                                            
[certs] etcd/peer serving cert is signed for DNS names [localhost minikube] and IPs [192.168.49.2 127.0.0.1 ::1]                                              
[certs] Generating "etcd/healthcheck-client" certificate and key                                                                                              
[certs] Generating "apiserver-etcd-client" certificate and key                                                                                                
[certs] Generating "sa" key and public key                                                                                                                    
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file                                                                                                             
[kubeconfig] Writing "kubelet.conf" kubeconfig file                                                                                                           
[kubeconfig] Writing "controller-manager.conf" kubeconfig file                                                                                                
[kubeconfig] Writing "scheduler.conf" kubeconfig file                                                                                                         
[kubelet-start] WARNING: unable to stop the kubelet service momentarily: [exit status 5]
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.

        Unfortunately, an error has occurred:
                timed out waiting for the condition

        This error is likely caused by: 
                - The kubelet is not running
                - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

        If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
                - 'systemctl status kubelet'
                - 'journalctl -xeu kubelet'

        Additionally, a control plane component may have crashed or exited when started by the container runtime.
        To troubleshoot, list all containers using your preferred container runtimes CLI.

        Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
                - 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
                Once you have found the failing container, you can inspect its logs with:
                - 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'


stderr:
        [WARNING Swap]: running with swap on is not supported. Please disable swap
        [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher

    ▪ Generating certificates and keys ...
    ▪ Booting up control plane ...

💣  Error starting cluster: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.1:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.22.1
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.

        Unfortunately, an error has occurred:
                timed out waiting for the condition

        This error is likely caused by: 
                - The kubelet is not running
                - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

        If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
                - 'systemctl status kubelet'
                - 'journalctl -xeu kubelet'

        Additionally, a control plane component may have crashed or exited when started by the container runtime.
        To troubleshoot, list all containers using your preferred container runtimes CLI.

        Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
                - 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
                Once you have found the failing container, you can inspect its logs with:
                - 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'


stderr:
        [WARNING Swap]: running with swap on is not supported. Please disable swap
        [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher

╭───────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                           │
│    😿  If the above advice does not help, please let us know:                             │
│    👉  https://github.com/kubernetes/minikube/issues/new/choose                           │
│                                                                                           │
│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                           │
╰───────────────────────────────────────────────────────────────────────────────────────────╯

❌  Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.1:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.22.1
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.

        Unfortunately, an error has occurred:
                timed out waiting for the condition

        This error is likely caused by: 
                - The kubelet is not running
                - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

        If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
                - 'systemctl status kubelet'
                - 'journalctl -xeu kubelet'

        Additionally, a control plane component may have crashed or exited when started by the container runtime.
        To troubleshoot, list all containers using your preferred container runtimes CLI.

        Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
                - 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
                Once you have found the failing container, you can inspect its logs with:
                - 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'


stderr:
        [WARNING Swap]: running with swap on is not supported. Please disable swap
        [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher

💡  Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
🍿  Related issue: https://github.com/kubernetes/minikube/issues/4172

$ minikube status 
minikube
type: Control Plane
host: Running
kubelet: Running
apiserver: Stopped
kubeconfig: Configured

Compare to the output of minikube-1.23.0 start:

😄  minikube v1.23.0 on Fedora 34
✨  Using the podman driver based on user configuration
🎉  minikube 1.23.1 is available! Download it: https://github.com/kubernetes/minikube/releases/tag/v1.23.1
💡  To disable this notice, run: 'minikube config set WantUpdateNotification false'

👍  Starting control plane node minikube in cluster minikube
🚜  Pulling base image ...
E0918 13:35:52.304162   12889 cache.go:200] Error downloading kic artifacts:  not yet implemented, see issue #8426
🔥  Creating podman container (CPUs=2, Memory=2200MB) ...
🎁  Preparing Kubernetes v1.22.1 on CRI-O 1.20.3 ...
    ▪ Generating certificates and keys ...
    ▪ Booting up control plane ...
    ▪ Configuring RBAC rules ...
🔗  Configuring CNI (Container Networking Interface) ...
🔎  Verifying Kubernetes components...
    ▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
🌟  Enabled addons: storage-provisioner, default-storageclass

❗  /usr/bin/kubectl is version 1.20.5, which may have incompatibilites with Kubernetes 1.22.1.
    ▪ Want kubectl v1.22.1? Try 'minikube kubectl -- get pods -A'
🏄  Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
@fsbaraglia

I have the same issue on a Mac using Parallels:
minikube start --cpus 4 --memory 4096 --driver=parallels --container-runtime=cri-o --disk-size=80G
😄 minikube v1.23.1 on Darwin 11.5.2
✨ Using the parallels driver based on user configuration
👍 Starting control plane node minikube in cluster minikube
🔥 Creating parallels VM (CPUs=4, Memory=4096MB, Disk=81920MB) ...
🎁 Preparing Kubernetes v1.22.1 on CRI-O 1.22.0 ...
▪ Generating certificates and keys ...
▪ Booting up control plane ...

@fsbaraglia

I tried with container-runtime=containerd and with docker; both work fine, so the problem is specific to cri-o.

minikube start --cpus 4 --memory 4096 --driver=parallels --container-runtime=containerd --disk-size=80G
😄 minikube v1.23.1 on Darwin 11.5.2
✨ Using the parallels driver based on user configuration
👍 Starting control plane node minikube in cluster minikube
💾 Downloading Kubernetes v1.22.1 preload ...
> preloaded-images-k8s-v13-v1...: 929.25 MiB / 929.25 MiB 100.00% 3.39 MiB
🔥 Creating parallels VM (CPUs=4, Memory=4096MB, Disk=81920MB) ...
📦 Preparing Kubernetes v1.22.1 on containerd 1.4.9 ...
▪ Generating certificates and keys ...
▪ Booting up control plane ...
▪ Configuring RBAC rules ...
🔗 Configuring bridge CNI (Container Networking Interface) ...
🔎 Verifying Kubernetes components...
▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
🌟 Enabled addons: storage-provisioner, default-storageclass
🏄 Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default

@fsbaraglia

fsbaraglia commented Sep 18, 2021

Fixed by adding an extra-config flag:

minikube start --cpus 4 --memory 4096 --driver=parallels --container-runtime=cri-o --disk-size=80G --extra-config=kubelet.cgroup-driver=systemd
😄 minikube v1.23.1 on Darwin 11.5.2
✨ Using the parallels driver based on user configuration
👍 Starting control plane node minikube in cluster minikube
🔥 Creating parallels VM (CPUs=4, Memory=4096MB, Disk=81920MB) ...
🎁 Preparing Kubernetes v1.22.1 on CRI-O 1.22.0 ...
▪ kubelet.cgroup-driver=systemd
▪ Generating certificates and keys ...
▪ Booting up control plane ...
▪ Configuring RBAC rules ...
🔗 Configuring bridge CNI (Container Networking Interface) ...
🔎 Verifying Kubernetes components...
▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
🌟 Enabled addons: storage-provisioner, default-storageclass
🏄 Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default

The only difference I can see via minikube ssh is the "--cgroup-driver=systemd" flag in the kubelet systemd drop-in (10-kubeadm.conf):

cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
[Unit]
Wants=docker.socket

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.22.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroup-driver=systemd --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/crio/crio.sock --hostname-override=minikube --image-service-endpoint=/var/run/crio/crio.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=10.211.55.60 --runtime-request-timeout=15m
# pwd
/var/lib/minikube/binaries/v1.22.1
# ./kubelet --version
Kubernetes v1.22.1
# ./kubelet --help | grep cgroup-driver
--cgroup-driver string                                     Driver that the kubelet uses to manipulate cgroups on the host.  Possible values: 'cgroupfs', 'systemd' (default "cgroupfs") (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.)

The problem is in the kubeadm.yaml file used to bootstrap the minikube cluster/node.
The kubeadm.yaml from my node is attached:
kubeadm.yaml.txt

In the latest kubelet version, 1.22.1, the --cgroup-driver parameter is deprecated, and by default it is set to cgroupfs.

To finally fix the problem, the following configuration needs to be inserted into kubeadm.yaml
(as also officially documented at https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/configure-cgroup-driver/):

kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
cgroupDriver: systemd

It is also present in the template https://github.com/kubernetes/minikube/blob/v1.23.1/hack/update/kubernetes_version/templates/v1beta2/crio.yaml,
but somehow it gets lost during deployment; a sketch of the combined config file follows below.
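
For illustration, here is a minimal sketch of what a combined kubeadm config could look like, following the kubernetes.io page linked above; the apiVersion and kubernetesVersion values below are placeholders based on this report, not minikube's actual generated file:

# illustrative sketch, not minikube's generated kubeadm.yaml
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.22.1
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd

kubeadm accepts multiple YAML documents in a single --config file, so the KubeletConfiguration document can simply be appended after the existing ones.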

@spowelljr (Member)

@fsbaraglia Thank you for your investigative work!

@spowelljr spowelljr added co/kubelet Kubelet config issues co/runtime/crio CRIO related issues kind/regression Categorizes issue or PR as related to a regression from a prior release. priority/important-soon Must be staffed and worked on either currently, or very soon, ideally in time for the next release. labels Sep 20, 2021
@medyagh medyagh changed the title minikube start fails since 1.23.1 minikube start on CRI-O fails since 1.23.1 Sep 20, 2021
@medyagh (Member)

medyagh commented Sep 20, 2021

If this is a kubelet config change, why does it only affect cri-o? This seems to be an important bug.

@spowelljr spowelljr self-assigned this Sep 20, 2021
@medyagh (Member)

medyagh commented Sep 20, 2021

For reference, here is the Docker runtime on the Docker driver:

docker@p1:~$ cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf                    
[Unit]
Wants=docker.socket

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.20.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=p1 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2

[Install]

Here is CRI-O on the Docker driver:

docker@p2:~$ cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
[Unit]
Wants=docker.socket

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.20.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cni-conf-dir=/etc/cni/net.mk --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/crio/crio.sock --hostname-override=p2 --image-service-endpoint=/var/run/crio/crio.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.58.2 --runtime-request-timeout=15m


The question is: why would kubelet "Wants" docker.socket for CRI-O?

@spowelljr (Member)

spowelljr commented Sep 21, 2021

@dilyanpalauzov @fsbaraglia I've discovered the root of the problem.

So in the point release from v1.23.0 -> v1.23.1 I updated cri-o from v1.20 -> v1.22.

Unknown to us, a PR titled "crio config only prints the fields that are different than the default" was included in the v1.22 release of cri-o. cri-o's default cgroup manager is systemd, so if the system uses systemd, it is no longer included in the crio config output.

For minikube to determine which cgroup manager to use, we default to cgroupfs and then parse the output of crio config. Since the new version of cri-o omits systemd as the cgroup manager (because it's the default), we had nothing to parse and fell back to our default of cgroupfs instead.
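
To make the failure mode concrete, here is a minimal Go sketch of the detection logic as described above; it is not minikube's actual code, and the function name and regex are illustrative assumptions:

// Illustrative sketch only: how parsing "crio config" for the cgroup manager
// can silently fall back to the wrong default once cri-o 1.22 omits
// default-valued fields from its output.
package main

import (
	"fmt"
	"os/exec"
	"regexp"
)

func detectCrioCgroupManager() string {
	// cri-o 1.20 printed `cgroup_manager = "systemd"` unconditionally;
	// cri-o 1.22 only prints fields that differ from its built-in defaults.
	out, err := exec.Command("sudo", "crio", "config").Output()
	if err != nil {
		return "cgroupfs" // historical fallback
	}
	re := regexp.MustCompile(`cgroup_manager\s*=\s*"(\w+)"`)
	if m := re.FindSubmatch(out); m != nil {
		return string(m[1])
	}
	// With cri-o 1.22 this branch is taken on systemd hosts, because the
	// default value is omitted from the output; returning "cgroupfs" here
	// is exactly the regression. Treating "not printed" as cri-o's own
	// default ("systemd") avoids it.
	return "cgroupfs"
}

func main() {
	fmt.Println("detected cgroup manager:", detectCrioCgroupManager())
}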

@spowelljr (Member)

@dilyanpalauzov @fsbaraglia This should be fixed with our newest release of minikube, v1.23.2:

https://github.com/kubernetes/minikube/releases/tag/v1.23.2
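
After upgrading to v1.23.2, recreating the cluster should pick up the fix. For example (illustrative only; the flags mirror the reporter's configuration, adjust for your setup):

minikube delete
minikube start --driver=podman --container-runtime=cri-o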
