
Kubelet fails to start - "is not running" or "unhealthy" #9027

Closed · lthistle opened this issue Aug 18, 2020 · 6 comments
Labels: kind/support, triage/needs-information

Steps to reproduce the issue:

  1. Install kubectl on Ubuntu 18.04 with apt-get according to the instructions here.
  2. Install minikube according to the instructions here (using KVM).
  3. Run minikube start --driver=docker.
  4. The command hangs for several minutes and then fails (a shell sketch of these steps follows below).
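
Condensed into a shell session for reference (a sketch; the apt repository setup for kubectl follows the linked upstream docs and is omitted here):

# 1-2. Install kubectl and minikube per the upstream instructions
sudo apt-get update && sudo apt-get install -y kubectl
curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
sudo install minikube-linux-amd64 /usr/local/bin/minikube
# 3. Start a cluster with the docker driver -- this is the command that fails
minikube start --driver=docker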

Full output of minikube start --driver=docker command:

😄  minikube v1.12.3 on Ubuntu 18.04
✨  Using the docker driver based on user configuration
👍  Starting control plane node minikube in cluster minikube
🔥  Creating docker container (CPUs=2, Memory=3900MB) ...
🐳  Preparing Kubernetes v1.18.3 on Docker 19.03.8 ...
initialization failed,will try again: run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.3:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.18.3
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.4.0-42-generic
DOCKER_VERSION: 19.03.8
DOCKER_GRAPH_DRIVER: overlay2
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [minikube localhost] and IPs [172.17.0.3 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [minikube localhost] and IPs [172.17.0.3 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.

        Unfortunately, an error has occurred:
                timed out waiting for the condition

        This error is likely caused by:
                - The kubelet is not running
                - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

        If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
                - 'systemctl status kubelet'
                - 'journalctl -xeu kubelet'

        Additionally, a control plane component may have crashed or exited when started by the container runtime.
        To troubleshoot, list all containers using your preferred container runtimes CLI.

        Here is one example how you may list all Kubernetes containers running in docker:
                - 'docker ps -a | grep kube | grep -v pause'
                Once you have found the failing container, you can inspect its logs with:
                - 'docker logs CONTAINERID'


stderr:
W0818 20:56:46.449156     753 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
        [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
        [WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.4.0-42-generic\n", err: exit status 1
        [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
W0818 20:56:50.690280     753 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
W0818 20:56:50.691688     753 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
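
Since the kubelet here runs inside the "minikube" container (docker driver), kubeadm's container-runtime suggestion translates to running the inner docker CLI through docker exec. A minimal sketch, assuming the default profile/container name "minikube"; CONTAINERID is a placeholder:

# List the Kubernetes containers started by the docker daemon inside the minikube container
docker exec minikube docker ps -a | grep kube | grep -v pause
# Inspect a failing container's logs by ID
docker exec minikube docker logs CONTAINERID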

Full output of minikube start --driver=docker --alsologtostderr command:

I0818 16:34:40.628841  354695 out.go:197] Setting JSON to false
I0818 16:34:40.647904  354695 start.go:100] hostinfo: {"hostname":"luke-DS81D","uptime":8878,"bootTime":1597774002,"procs":393,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"18.04","kernelVersion":"5.4.0-42-generic","virtualizationSystem":"kvm","virtualizationRole":"host","hostid":"59a79f51-31fa-4a63-b928-3865d0d3f708"}
I0818 16:34:40.648509  354695 start.go:110] virtualization: kvm host
I0818 16:34:40.664276  354695 out.go:105] 😄
I0818 16:34:40.664692  354695 driver.go:287] Setting default libvirt URI to qemu:///system
I0818 16:34:40.664878  354695 notify.go:125] Checking for updates...
I0818 16:34:40.706459  354695 docker.go:87] docker version: linux-19.03.12
I0818 16:34:40.714537  354695 out.go:105] ✨  Using the docker driver based on user configuration
I0818 16:34:40.714576  354695 start.go:232] selected driver: docker
I0818 16:34:40.714587  354695 start.go:638] validating driver "docker" against <nil>
I0818 16:34:40.714604  354695 start.go:649] status for docker: {Installed:true Healthy:true NeedsImprovement:false Error:<nil> Fix: Doc:}
I0818 16:34:40.714666  354695 cli_runner.go:109] Run: docker system info --format "{{json .}}"
I0818 16:34:40.786146  354695 start_flags.go:222] no existing cluster config was found, will generate one from the flags 
I0818 16:34:40.786177  354695 start_flags.go:240] Using suggested 3900MB memory alloc based on sys=15902MB, container=15902MB
I0818 16:34:40.786295  354695 start_flags.go:613] Wait components to verify : map[apiserver:true system_pods:true]
I0818 16:34:40.786318  354695 cni.go:74] Creating CNI manager for ""
I0818 16:34:40.786325  354695 cni.go:117] CNI unnecessary in this configuration, recommending no CNI
I0818 16:34:40.786335  354695 start_flags.go:344] config:
{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.11@sha256:6fee59db7d67ed8ae6835e4bcb02f32056dc95f11cb369c51e352b62dd198aa0 Memory:3900 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio KubernetesConfig:{KubernetesVersion:v1.18.3 ClusterName:minikube APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s}
I0818 16:34:40.793741  354695 out.go:105] 👍
I0818 16:34:40.831723  354695 image.go:92] Found gcr.io/k8s-minikube/kicbase:v0.0.11@sha256:6fee59db7d67ed8ae6835e4bcb02f32056dc95f11cb369c51e352b62dd198aa0 in local docker daemon, skipping pull
I0818 16:34:40.831745  354695 cache.go:113] gcr.io/k8s-minikube/kicbase:v0.0.11@sha256:6fee59db7d67ed8ae6835e4bcb02f32056dc95f11cb369c51e352b62dd198aa0 exists in daemon, skipping pull
I0818 16:34:40.831753  354695 preload.go:97] Checking if preload exists for k8s version v1.18.3 and runtime docker
I0818 16:34:40.831784  354695 preload.go:105] Found local preload: /home/luke/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v5-v1.18.3-docker-overlay2-amd64.tar.lz4
I0818 16:34:40.831791  354695 cache.go:51] Caching tarball of preloaded images
I0818 16:34:40.831811  354695 preload.go:131] Found /home/luke/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v5-v1.18.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I0818 16:34:40.831818  354695 cache.go:54] Finished verifying existence of preloaded tar for  v1.18.3 on docker
I0818 16:34:40.832072  354695 profile.go:150] Saving config to /home/luke/.minikube/profiles/minikube/config.json ...
I0818 16:34:40.832097  354695 lock.go:35] WriteFile acquiring /home/luke/.minikube/profiles/minikube/config.json: {Name:mka8e8157f614e7faa5576ee8400c181f8aaf0da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0818 16:34:40.832284  354695 cache.go:181] Successfully downloaded all kic artifacts
I0818 16:34:40.832307  354695 start.go:244] acquiring machines lock for minikube: {Name:mkf4cc6d97a8a4a9e9d62738b36ad0978a88f608 Clock:{} Delay:500ms Timeout:15m0s Cancel:<nil>}
I0818 16:34:40.832352  354695 start.go:248] acquired machines lock for "minikube" in 32.647µs
I0818 16:34:40.832367  354695 start.go:85] Provisioning new machine with config: &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.11@sha256:6fee59db7d67ed8ae6835e4bcb02f32056dc95f11cb369c51e352b62dd198aa0 Memory:3900 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio KubernetesConfig:{KubernetesVersion:v1.18.3 ClusterName:minikube APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.3 ControlPlane:true Worker:true}] Addons:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.18.3 ControlPlane:true Worker:true}
I0818 16:34:40.832411  354695 start.go:122] createHost starting for "" (driver="docker")
I0818 16:34:40.839851  354695 out.go:105] 🔥
I0818 16:34:40.840046  354695 start.go:158] libmachine.API.Create for "minikube" (driver="docker")
I0818 16:34:40.840073  354695 client.go:164] LocalClient.Create starting
I0818 16:34:40.840119  354695 main.go:115] libmachine: Reading certificate data from /home/luke/.minikube/certs/ca.pem
I0818 16:34:40.840151  354695 main.go:115] libmachine: Decoding PEM data...
I0818 16:34:40.840167  354695 main.go:115] libmachine: Parsing certificate...
I0818 16:34:40.840268  354695 main.go:115] libmachine: Reading certificate data from /home/luke/.minikube/certs/cert.pem
I0818 16:34:40.840290  354695 main.go:115] libmachine: Decoding PEM data...
I0818 16:34:40.840303  354695 main.go:115] libmachine: Parsing certificate...
I0818 16:34:40.840586  354695 cli_runner.go:109] Run: docker ps -a --format {{.Names}}
I0818 16:34:40.878402  354695 cli_runner.go:109] Run: docker volume create minikube --label name.minikube.sigs.k8s.io=minikube --label created_by.minikube.sigs.k8s.io=true
I0818 16:34:40.919664  354695 oci.go:101] Successfully created a docker volume minikube
I0818 16:34:40.919743  354695 cli_runner.go:109] Run: docker run --rm --entrypoint /usr/bin/test -v minikube:/var gcr.io/k8s-minikube/kicbase:v0.0.11@sha256:6fee59db7d67ed8ae6835e4bcb02f32056dc95f11cb369c51e352b62dd198aa0 -d /var/lib
I0818 16:34:41.671760  354695 oci.go:105] Successfully prepared a docker volume minikube
W0818 16:34:41.671811  354695 oci.go:165] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
I0818 16:34:41.672010  354695 cli_runner.go:109] Run: docker info --format "'{{json .SecurityOptions}}'"
I0818 16:34:41.671865  354695 preload.go:97] Checking if preload exists for k8s version v1.18.3 and runtime docker
I0818 16:34:41.672077  354695 preload.go:105] Found local preload: /home/luke/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v5-v1.18.3-docker-overlay2-amd64.tar.lz4
I0818 16:34:41.672089  354695 kic.go:133] Starting extracting preloaded images to volume ...
I0818 16:34:41.672149  354695 cli_runner.go:109] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/luke/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v5-v1.18.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v minikube:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.11@sha256:6fee59db7d67ed8ae6835e4bcb02f32056dc95f11cb369c51e352b62dd198aa0 -I lz4 -xvf /preloaded.tar -C /extractDir
I0818 16:34:41.750238  354695 cli_runner.go:109] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname minikube --name minikube --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=minikube --volume minikube:/var --security-opt apparmor=unconfined --cpus=2 --memory=3900mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 gcr.io/k8s-minikube/kicbase:v0.0.11@sha256:6fee59db7d67ed8ae6835e4bcb02f32056dc95f11cb369c51e352b62dd198aa0
I0818 16:34:42.258732  354695 cli_runner.go:109] Run: docker container inspect minikube --format={{.State.Running}}
I0818 16:34:42.300733  354695 cli_runner.go:109] Run: docker container inspect minikube --format={{.State.Status}}
I0818 16:34:42.344785  354695 cli_runner.go:109] Run: docker exec minikube stat /var/lib/dpkg/alternatives/iptables
I0818 16:34:42.474731  354695 oci.go:222] the created container "minikube" has a running status.
I0818 16:34:42.474756  354695 kic.go:157] Creating ssh key for kic: /home/luke/.minikube/machines/minikube/id_rsa...
I0818 16:34:42.635731  354695 kic_runner.go:179] docker (temp): /home/luke/.minikube/machines/minikube/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
I0818 16:34:42.797901  354695 cli_runner.go:109] Run: docker container inspect minikube --format={{.State.Status}}
I0818 16:34:42.842226  354695 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
I0818 16:34:42.842269  354695 kic_runner.go:114] Args: [docker exec --privileged minikube chown docker:docker /home/docker/.ssh/authorized_keys]
I0818 16:34:47.129267  354695 cli_runner.go:151] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/luke/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v5-v1.18.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v minikube:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.11@sha256:6fee59db7d67ed8ae6835e4bcb02f32056dc95f11cb369c51e352b62dd198aa0 -I lz4 -xvf /preloaded.tar -C /extractDir: (5.457071424s)
I0818 16:34:47.129296  354695 kic.go:138] duration metric: took 5.457206 seconds to extract preloaded images to volume
I0818 16:34:47.129377  354695 cli_runner.go:109] Run: docker container inspect minikube --format={{.State.Status}}
I0818 16:34:47.167929  354695 machine.go:88] provisioning docker machine ...
I0818 16:34:47.167961  354695 ubuntu.go:166] provisioning hostname "minikube"
I0818 16:34:47.168012  354695 cli_runner.go:109] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0818 16:34:47.202905  354695 main.go:115] libmachine: Using SSH client type: native
I0818 16:34:47.203089  354695 main.go:115] libmachine: &{{{<nil> 0 [] [] []} docker [0x7b9850] 0x7b9820 <nil>  [] 0s} 127.0.0.1 32775 <nil> <nil>}
I0818 16:34:47.203105  354695 main.go:115] libmachine: About to run SSH command:
sudo hostname minikube && echo "minikube" | sudo tee /etc/hostname
I0818 16:34:47.378038  354695 main.go:115] libmachine: SSH cmd err, output: <nil>: minikube

I0818 16:34:47.378175  354695 cli_runner.go:109] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0818 16:34:47.425638  354695 main.go:115] libmachine: Using SSH client type: native
I0818 16:34:47.425802  354695 main.go:115] libmachine: &{{{<nil> 0 [] [] []} docker [0x7b9850] 0x7b9820 <nil>  [] 0s} 127.0.0.1 32775 <nil> <nil>}
I0818 16:34:47.425820  354695 main.go:115] libmachine: About to run SSH command:

                if ! grep -xq '.*\sminikube' /etc/hosts; then
                        if grep -xq '127.0.1.1\s.*' /etc/hosts; then
                                sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 minikube/g' /etc/hosts;
                        else 
                                echo '127.0.1.1 minikube' | sudo tee -a /etc/hosts; 
                        fi
                fi
I0818 16:34:47.542862  354695 main.go:115] libmachine: SSH cmd err, output: <nil>: 
I0818 16:34:47.542923  354695 ubuntu.go:172] set auth options {CertDir:/home/luke/.minikube CaCertPath:/home/luke/.minikube/certs/ca.pem CaPrivateKeyPath:/home/luke/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/luke/.minikube/machines/server.pem ServerKeyPath:/home/luke/.minikube/machines/server-key.pem ClientKeyPath:/home/luke/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/luke/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/luke/.minikube}
I0818 16:34:47.542977  354695 ubuntu.go:174] setting up certificates
I0818 16:34:47.542997  354695 provision.go:82] configureAuth start
I0818 16:34:47.543101  354695 cli_runner.go:109] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube
I0818 16:34:47.582465  354695 provision.go:131] copyHostCerts
I0818 16:34:47.582522  354695 exec_runner.go:91] found /home/luke/.minikube/ca.pem, removing ...
I0818 16:34:47.582591  354695 exec_runner.go:98] cp: /home/luke/.minikube/certs/ca.pem --> /home/luke/.minikube/ca.pem (1029 bytes)
I0818 16:34:47.582676  354695 exec_runner.go:91] found /home/luke/.minikube/cert.pem, removing ...
I0818 16:34:47.582712  354695 exec_runner.go:98] cp: /home/luke/.minikube/certs/cert.pem --> /home/luke/.minikube/cert.pem (1070 bytes)
I0818 16:34:47.582773  354695 exec_runner.go:91] found /home/luke/.minikube/key.pem, removing ...
I0818 16:34:47.582803  354695 exec_runner.go:98] cp: /home/luke/.minikube/certs/key.pem --> /home/luke/.minikube/key.pem (1675 bytes)
I0818 16:34:47.582854  354695 provision.go:105] generating server cert: /home/luke/.minikube/machines/server.pem ca-key=/home/luke/.minikube/certs/ca.pem private-key=/home/luke/.minikube/certs/ca-key.pem org=luke.minikube san=[172.17.0.3 localhost 127.0.0.1 minikube minikube]
I0818 16:34:47.812905  354695 provision.go:159] copyRemoteCerts
I0818 16:34:47.812993  354695 ssh_runner.go:148] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0818 16:34:47.813045  354695 cli_runner.go:109] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0818 16:34:47.850019  354695 sshutil.go:44] new ssh client: &{IP:127.0.0.1 Port:32775 SSHKeyPath:/home/luke/.minikube/machines/minikube/id_rsa Username:docker}
I0818 16:34:47.958620  354695 ssh_runner.go:215] scp /home/luke/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1029 bytes)
I0818 16:34:48.048655  354695 ssh_runner.go:215] scp /home/luke/.minikube/machines/server.pem --> /etc/docker/server.pem (1139 bytes)
I0818 16:34:48.136112  354695 ssh_runner.go:215] scp /home/luke/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I0818 16:34:48.224331  354695 provision.go:85] duration metric: configureAuth took 681.316509ms
I0818 16:34:48.224367  354695 ubuntu.go:190] setting minikube options for container-runtime
I0818 16:34:48.224635  354695 cli_runner.go:109] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0818 16:34:48.264360  354695 main.go:115] libmachine: Using SSH client type: native
I0818 16:34:48.264551  354695 main.go:115] libmachine: &{{{<nil> 0 [] [] []} docker [0x7b9850] 0x7b9820 <nil>  [] 0s} 127.0.0.1 32775 <nil> <nil>}
I0818 16:34:48.264571  354695 main.go:115] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0818 16:34:48.384638  354695 main.go:115] libmachine: SSH cmd err, output: <nil>: overlay

I0818 16:34:48.384675  354695 ubuntu.go:71] root file system type: overlay
I0818 16:34:48.384886  354695 provision.go:290] Updating docker unit: /lib/systemd/system/docker.service ...
I0818 16:34:48.384971  354695 cli_runner.go:109] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0818 16:34:48.424178  354695 main.go:115] libmachine: Using SSH client type: native
I0818 16:34:48.424328  354695 main.go:115] libmachine: &{{{<nil> 0 [] [] []} docker [0x7b9850] 0x7b9820 <nil>  [] 0s} 127.0.0.1 32775 <nil> <nil>}
I0818 16:34:48.424416  354695 main.go:115] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket

[Service]
Type=notify



# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
ExecReload=/bin/kill -s HUP $MAINPID

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0818 16:34:48.598510  354695 main.go:115] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket

[Service]
Type=notify



# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
ExecReload=/bin/kill -s HUP 

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target

I0818 16:34:48.598632  354695 cli_runner.go:109] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0818 16:34:48.638925  354695 main.go:115] libmachine: Using SSH client type: native
I0818 16:34:48.639129  354695 main.go:115] libmachine: &{{{<nil> 0 [] [] []} docker [0x7b9850] 0x7b9820 <nil>  [] 0s} 127.0.0.1 32775 <nil> <nil>}
I0818 16:34:48.639168  354695 main.go:115] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0818 16:34:49.348441  354695 main.go:115] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service       2020-03-10 19:42:48.000000000 +0000
+++ /lib/systemd/system/docker.service.new      2020-08-18 20:34:48.591308120 +0000
@@ -8,24 +8,22 @@
 
 [Service]
 Type=notify
-# the default is not to use systemd for cgroups because the delegate issues still
-# exists and systemd currently does not support the cgroup feature set required
-# for containers run by docker
-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
-ExecReload=/bin/kill -s HUP $MAINPID
-TimeoutSec=0
-RestartSec=2
-Restart=always
-
-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
-# to make them work for either version of systemd.
-StartLimitBurst=3
-
-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
-# this option work for either version of systemd.
-StartLimitInterval=60s
+
+
+
+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
+# The base configuration already specifies an 'ExecStart=...' command. The first directive
+# here is to clear out that command inherited from the base configuration. Without this,
+# the command from the base configuration and the command specified here are treated as
+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
+# will catch this invalid input and refuse to start the service with an error like:
+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
+
+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
+ExecStart=
+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
+ExecReload=/bin/kill -s HUP 
 
 # Having non-zero Limit*s causes performance problems due to accounting overhead
 # in the kernel. We recommend using cgroups to do container-local accounting.
@@ -33,9 +31,10 @@
 LimitNPROC=infinity
 LimitCORE=infinity
 
-# Comment TasksMax if your systemd version does not support it.
-# Only systemd 226 and above support this option.
+# Uncomment TasksMax if your systemd version supports it.
+# Only systemd 226 and above support this version.
 TasksMax=infinity
+TimeoutStartSec=0
 
 # set delegate yes so that systemd does not reset the cgroups of docker containers
 Delegate=yes

I0818 16:34:49.348507  354695 machine.go:91] provisioned docker machine in 2.180558071s
I0818 16:34:49.348528  354695 client.go:167] LocalClient.Create took 8.508447461s
I0818 16:34:49.348546  354695 start.go:166] duration metric: libmachine.API.Create for "minikube" took 8.508500773s
I0818 16:34:49.348559  354695 start.go:207] post-start starting for "minikube" (driver="docker")
I0818 16:34:49.348568  354695 start.go:217] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0818 16:34:49.348620  354695 ssh_runner.go:148] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0818 16:34:49.348663  354695 cli_runner.go:109] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0818 16:34:49.382928  354695 sshutil.go:44] new ssh client: &{IP:127.0.0.1 Port:32775 SSHKeyPath:/home/luke/.minikube/machines/minikube/id_rsa Username:docker}
I0818 16:34:49.497938  354695 ssh_runner.go:148] Run: cat /etc/os-release
I0818 16:34:49.502679  354695 main.go:115] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0818 16:34:49.502706  354695 main.go:115] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0818 16:34:49.502719  354695 main.go:115] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0818 16:34:49.502729  354695 info.go:99] Remote host: Ubuntu 20.04 LTS
I0818 16:34:49.502742  354695 filesync.go:118] Scanning /home/luke/.minikube/addons for local assets ...
I0818 16:34:49.502784  354695 filesync.go:118] Scanning /home/luke/.minikube/files for local assets ...
I0818 16:34:49.502814  354695 start.go:210] post-start completed in 154.244633ms
I0818 16:34:49.503128  354695 cli_runner.go:109] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube
I0818 16:34:49.538359  354695 profile.go:150] Saving config to /home/luke/.minikube/profiles/minikube/config.json ...
I0818 16:34:49.538635  354695 start.go:125] duration metric: createHost completed in 8.706214356s
I0818 16:34:49.538647  354695 start.go:76] releasing machines lock for "minikube", held for 8.706284968s
I0818 16:34:49.538734  354695 cli_runner.go:109] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube
I0818 16:34:49.574176  354695 ssh_runner.go:148] Run: systemctl --version
I0818 16:34:49.574220  354695 cli_runner.go:109] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0818 16:34:49.574228  354695 ssh_runner.go:148] Run: curl -sS -m 2 https://k8s.gcr.io/
I0818 16:34:49.574276  354695 cli_runner.go:109] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0818 16:34:49.610426  354695 sshutil.go:44] new ssh client: &{IP:127.0.0.1 Port:32775 SSHKeyPath:/home/luke/.minikube/machines/minikube/id_rsa Username:docker}
I0818 16:34:49.610500  354695 sshutil.go:44] new ssh client: &{IP:127.0.0.1 Port:32775 SSHKeyPath:/home/luke/.minikube/machines/minikube/id_rsa Username:docker}
I0818 16:34:49.768262  354695 ssh_runner.go:148] Run: sudo systemctl is-active --quiet service containerd
I0818 16:34:49.802037  354695 ssh_runner.go:148] Run: sudo systemctl cat docker.service
I0818 16:34:49.831973  354695 cruntime.go:194] skipping containerd shutdown because we are bound to it
I0818 16:34:49.832039  354695 ssh_runner.go:148] Run: sudo systemctl is-active --quiet service crio
I0818 16:34:49.862380  354695 ssh_runner.go:148] Run: sudo systemctl cat docker.service
I0818 16:34:49.891224  354695 ssh_runner.go:148] Run: sudo systemctl daemon-reload
I0818 16:34:49.984174  354695 ssh_runner.go:148] Run: sudo systemctl start docker
I0818 16:34:50.009017  354695 ssh_runner.go:148] Run: docker version --format {{.Server.Version}}
I0818 16:34:50.064698  354695 out.go:105] 🐳
I0818 16:34:50.064807  354695 cli_runner.go:109] Run: docker network ls --filter name=bridge --format {{.ID}}
I0818 16:34:50.101833  354695 cli_runner.go:109] Run: docker network inspect --format "{{(index .IPAM.Config 0).Gateway}}" c68702c42c19
I0818 16:34:50.137037  354695 network.go:77] got host ip for mount in container by inspect docker network: 172.17.0.1
I0818 16:34:50.137113  354695 ssh_runner.go:148] Run: grep 172.17.0.1   host.minikube.internal$ /etc/hosts
I0818 16:34:50.139794  354695 ssh_runner.go:148] Run: /bin/bash -c "{ grep -v '\thost.minikube.internal$' /etc/hosts; echo "172.17.0.1  host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ /etc/hosts"
I0818 16:34:50.165282  354695 preload.go:97] Checking if preload exists for k8s version v1.18.3 and runtime docker
I0818 16:34:50.165320  354695 preload.go:105] Found local preload: /home/luke/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v5-v1.18.3-docker-overlay2-amd64.tar.lz4
I0818 16:34:50.165382  354695 ssh_runner.go:148] Run: docker images --format {{.Repository}}:{{.Tag}}
I0818 16:34:50.204445  354695 docker.go:381] Got preloaded images: -- stdout --
gcr.io/k8s-minikube/storage-provisioner:v2
kubernetesui/dashboard:v2.0.1
k8s.gcr.io/kube-proxy:v1.18.3
k8s.gcr.io/kube-scheduler:v1.18.3
k8s.gcr.io/kube-apiserver:v1.18.3
k8s.gcr.io/kube-controller-manager:v1.18.3
kubernetesui/metrics-scraper:v1.0.4
k8s.gcr.io/pause:3.2
k8s.gcr.io/coredns:1.6.7
k8s.gcr.io/etcd:3.4.3-0

-- /stdout --
I0818 16:34:50.204468  354695 docker.go:319] Images already preloaded, skipping extraction
I0818 16:34:50.204507  354695 ssh_runner.go:148] Run: docker images --format {{.Repository}}:{{.Tag}}
I0818 16:34:50.239522  354695 docker.go:381] Got preloaded images: -- stdout --
gcr.io/k8s-minikube/storage-provisioner:v2
kubernetesui/dashboard:v2.0.1
k8s.gcr.io/kube-proxy:v1.18.3
k8s.gcr.io/kube-scheduler:v1.18.3
k8s.gcr.io/kube-controller-manager:v1.18.3
k8s.gcr.io/kube-apiserver:v1.18.3
kubernetesui/metrics-scraper:v1.0.4
k8s.gcr.io/pause:3.2
k8s.gcr.io/coredns:1.6.7
k8s.gcr.io/etcd:3.4.3-0

-- /stdout --
I0818 16:34:50.239556  354695 cache_images.go:74] Images are preloaded, skipping loading
I0818 16:34:50.239600  354695 ssh_runner.go:148] Run: docker info --format {{.CgroupDriver}}
I0818 16:34:50.280389  354695 cni.go:74] Creating CNI manager for ""
I0818 16:34:50.280406  354695 cni.go:117] CNI unnecessary in this configuration, recommending no CNI
I0818 16:34:50.280416  354695 kubeadm.go:84] Using pod CIDR: 
I0818 16:34:50.280431  354695 kubeadm.go:150] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet: AdvertiseAddress:172.17.0.3 APIServerPort:8443 KubernetesVersion:v1.18.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:minikube NodeName:minikube DNSDomain:cluster.local CRISocket: ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.17.0.3"]]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:172.17.0.3 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
I0818 16:34:50.280521  354695 kubeadm.go:154] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 172.17.0.3
  bindPort: 8443
bootstrapTokens:
  - groups:
      - system:bootstrappers:kubeadm:default-node-token
    ttl: 24h0m0s
    usages:
      - signing
      - authentication
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: "minikube"
  kubeletExtraArgs:
    node-ip: 172.17.0.3
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "172.17.0.3"]
  extraArgs:
    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
controllerManager:
  extraArgs:
    "leader-elect": "false"
scheduler:
  extraArgs:
    "leader-elect": "false"
kubernetesVersion: v1.18.3
networking:
  dnsDomain: cluster.local
  podSubnet: ""
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%"
  nodefs.inodesFree: "0%"
  imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: ""
metricsBindAddress: 172.17.0.3:10249

I0818 16:34:50.280635  354695 kubeadm.go:796] kubelet [Unit]
Wants=docker.socket

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.18.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=minikube --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.17.0.3

[Install]
 config:
{KubernetesVersion:v1.18.3 ClusterName:minikube APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
I0818 16:34:50.280698  354695 ssh_runner.go:148] Run: sudo ls /var/lib/minikube/binaries/v1.18.3
I0818 16:34:50.309110  354695 binaries.go:43] Found k8s binaries, skipping transfer
I0818 16:34:50.309173  354695 ssh_runner.go:148] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0818 16:34:50.343344  354695 ssh_runner.go:215] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (332 bytes)
I0818 16:34:50.419351  354695 ssh_runner.go:215] scp memory --> /lib/systemd/system/kubelet.service (349 bytes)
I0818 16:34:50.505899  354695 ssh_runner.go:215] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (1730 bytes)
I0818 16:34:50.595733  354695 ssh_runner.go:148] Run: grep 172.17.0.3   control-plane.minikube.internal$ /etc/hosts
I0818 16:34:50.599745  354695 ssh_runner.go:148] Run: /bin/bash -c "{ grep -v '\tcontrol-plane.minikube.internal$' /etc/hosts; echo "172.17.0.3 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ /etc/hosts"
I0818 16:34:50.627490  354695 ssh_runner.go:148] Run: sudo systemctl daemon-reload
I0818 16:34:50.726017  354695 ssh_runner.go:148] Run: sudo systemctl start kubelet
I0818 16:34:50.773078  354695 certs.go:52] Setting up /home/luke/.minikube/profiles/minikube for IP: 172.17.0.3
I0818 16:34:50.773122  354695 certs.go:169] skipping minikubeCA CA generation: /home/luke/.minikube/ca.key
I0818 16:34:50.773135  354695 certs.go:169] skipping proxyClientCA CA generation: /home/luke/.minikube/proxy-client-ca.key
I0818 16:34:50.773170  354695 certs.go:273] generating minikube-user signed cert: /home/luke/.minikube/profiles/minikube/client.key
I0818 16:34:50.773175  354695 crypto.go:69] Generating cert /home/luke/.minikube/profiles/minikube/client.crt with IP's: []
I0818 16:34:50.868635  354695 crypto.go:157] Writing cert to /home/luke/.minikube/profiles/minikube/client.crt ...
I0818 16:34:50.868666  354695 lock.go:35] WriteFile acquiring /home/luke/.minikube/profiles/minikube/client.crt: {Name:mkc5ee3e8bcaa4ede2c3485c5cdbbff617104d91 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0818 16:34:50.868861  354695 crypto.go:165] Writing key to /home/luke/.minikube/profiles/minikube/client.key ...
I0818 16:34:50.868880  354695 lock.go:35] WriteFile acquiring /home/luke/.minikube/profiles/minikube/client.key: {Name:mkf81597a7c80b13399b95cdbcad6a2de3e2cdd1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0818 16:34:50.868999  354695 certs.go:273] generating minikube signed cert: /home/luke/.minikube/profiles/minikube/apiserver.key.0f3e66d0
I0818 16:34:50.869006  354695 crypto.go:69] Generating cert /home/luke/.minikube/profiles/minikube/apiserver.crt.0f3e66d0 with IP's: [172.17.0.3 10.96.0.1 127.0.0.1 10.0.0.1]
I0818 16:34:51.156579  354695 crypto.go:157] Writing cert to /home/luke/.minikube/profiles/minikube/apiserver.crt.0f3e66d0 ...
I0818 16:34:51.156610  354695 lock.go:35] WriteFile acquiring /home/luke/.minikube/profiles/minikube/apiserver.crt.0f3e66d0: {Name:mk750f2bb1b842911caf6e3ad426f813ee70a2f2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0818 16:34:51.156821  354695 crypto.go:165] Writing key to /home/luke/.minikube/profiles/minikube/apiserver.key.0f3e66d0 ...
I0818 16:34:51.156840  354695 lock.go:35] WriteFile acquiring /home/luke/.minikube/profiles/minikube/apiserver.key.0f3e66d0: {Name:mkf6c3a7fb278c6dfabfe06aacb24990e1ff1f07 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0818 16:34:51.156955  354695 certs.go:284] copying /home/luke/.minikube/profiles/minikube/apiserver.crt.0f3e66d0 -> /home/luke/.minikube/profiles/minikube/apiserver.crt
I0818 16:34:51.157048  354695 certs.go:288] copying /home/luke/.minikube/profiles/minikube/apiserver.key.0f3e66d0 -> /home/luke/.minikube/profiles/minikube/apiserver.key
I0818 16:34:51.157114  354695 certs.go:273] generating aggregator signed cert: /home/luke/.minikube/profiles/minikube/proxy-client.key
I0818 16:34:51.157120  354695 crypto.go:69] Generating cert /home/luke/.minikube/profiles/minikube/proxy-client.crt with IP's: []
I0818 16:34:51.303578  354695 crypto.go:157] Writing cert to /home/luke/.minikube/profiles/minikube/proxy-client.crt ...
I0818 16:34:51.303608  354695 lock.go:35] WriteFile acquiring /home/luke/.minikube/profiles/minikube/proxy-client.crt: {Name:mkc2d2a3ae365b642b7d694d824fe2b5626e2056 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0818 16:34:51.303813  354695 crypto.go:165] Writing key to /home/luke/.minikube/profiles/minikube/proxy-client.key ...
I0818 16:34:51.303832  354695 lock.go:35] WriteFile acquiring /home/luke/.minikube/profiles/minikube/proxy-client.key: {Name:mk3fd3dbfdaeb8f53fc2ce4cd48dc15ef0fcc0ad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0818 16:34:51.304047  354695 certs.go:348] found cert: /home/luke/.minikube/certs/home/luke/.minikube/certs/ca-key.pem (1679 bytes)
I0818 16:34:51.304109  354695 certs.go:348] found cert: /home/luke/.minikube/certs/home/luke/.minikube/certs/ca.pem (1029 bytes)
I0818 16:34:51.304155  354695 certs.go:348] found cert: /home/luke/.minikube/certs/home/luke/.minikube/certs/cert.pem (1070 bytes)
I0818 16:34:51.304192  354695 certs.go:348] found cert: /home/luke/.minikube/certs/home/luke/.minikube/certs/key.pem (1675 bytes)
I0818 16:34:51.304830  354695 ssh_runner.go:215] scp /home/luke/.minikube/profiles/minikube/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1350 bytes)
I0818 16:34:51.385765  354695 ssh_runner.go:215] scp /home/luke/.minikube/profiles/minikube/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
I0818 16:34:51.476941  354695 ssh_runner.go:215] scp /home/luke/.minikube/profiles/minikube/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1103 bytes)
I0818 16:34:51.559055  354695 ssh_runner.go:215] scp /home/luke/.minikube/profiles/minikube/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
I0818 16:34:51.639261  354695 ssh_runner.go:215] scp /home/luke/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1066 bytes)
I0818 16:34:51.730568  354695 ssh_runner.go:215] scp /home/luke/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I0818 16:34:51.817313  354695 ssh_runner.go:215] scp /home/luke/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1074 bytes)
I0818 16:34:51.905500  354695 ssh_runner.go:215] scp /home/luke/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I0818 16:34:51.998177  354695 ssh_runner.go:215] scp /home/luke/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1066 bytes)
I0818 16:34:52.084830  354695 ssh_runner.go:215] scp memory --> /var/lib/minikube/kubeconfig (392 bytes)
I0818 16:34:52.164722  354695 ssh_runner.go:148] Run: openssl version
I0818 16:34:52.174298  354695 ssh_runner.go:148] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0818 16:34:52.200958  354695 ssh_runner.go:148] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0818 16:34:52.205923  354695 certs.go:389] hashing: -rw-r--r-- 1 root root 1066 Aug 18 19:54 /usr/share/ca-certificates/minikubeCA.pem
I0818 16:34:52.205994  354695 ssh_runner.go:148] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0818 16:34:52.213382  354695 ssh_runner.go:148] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0818 16:34:52.241155  354695 kubeadm.go:327] StartCluster: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.11@sha256:6fee59db7d67ed8ae6835e4bcb02f32056dc95f11cb369c51e352b62dd198aa0 Memory:3900 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio KubernetesConfig:{KubernetesVersion:v1.18.3 ClusterName:minikube APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:172.17.0.3 Port:8443 KubernetesVersion:v1.18.3 ControlPlane:true Worker:true}] Addons:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s}
I0818 16:34:52.241276  354695 ssh_runner.go:148] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0818 16:34:52.280334  354695 ssh_runner.go:148] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0818 16:34:52.308721  354695 ssh_runner.go:148] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0818 16:34:52.341238  354695 kubeadm.go:211] ignoring SystemVerification for kubeadm because of docker driver
I0818 16:34:52.341330  354695 ssh_runner.go:148] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0818 16:34:52.369018  354695 kubeadm.go:147] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:

stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0818 16:34:52.369057  354695 ssh_runner.go:148] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.3:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I0818 16:39:16.991071  354695 ssh_runner.go:188] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.3:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": (4m24.621982876s)
W0818 16:39:16.991209  354695 out.go:151] 💥s=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.18.3
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.4.0-42-generic
DOCKER_VERSION: 19.03.8
DOCKER_GRAPH_DRIVER: overlay2
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [minikube localhost] and IPs [172.17.0.3 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [minikube localhost] and IPs [172.17.0.3 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.

        Unfortunately, an error has occurred:
                timed out waiting for the condition

        This error is likely caused by:
                - The kubelet is not running
                - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

        If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
                - 'systemctl status kubelet'
                - 'journalctl -xeu kubelet'

        Additionally, a control plane component may have crashed or exited when started by the container runtime.
        To troubleshoot, list all containers using your preferred container runtimes CLI.

        Here is one example how you may list all Kubernetes containers running in docker:
                - 'docker ps -a | grep kube | grep -v pause'
                Once you have found the failing container, you can inspect its logs with:
                - 'docker logs CONTAINERID'


stderr:
W0818 20:34:52.428836     758 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
        [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
        [WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.4.0-42-generic\n", err: exit status 1
        [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
W0818 20:34:55.480820     758 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
W0818 20:34:55.482262     758 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher

When I try to start minikube with the docker driver, the command hangs on "Preparing Kubernetes v1.18.3 on Docker 19.03.8 ..." for several minutes and then fails. The kubelet appears to be failing to start. As the error message advises, I ran systemctl status kubelet to troubleshoot, but the output is "Unit kubelet.service could not be found." Does anyone know how I can fix this?

Issue #5451 appears to be similar; however, running minikube delete followed by minikube start does not resolve my issue.
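A side note for anyone hitting the same wall: with the docker driver the kubelet runs inside the minikube container, not on the host, so the host's systemd has no kubelet.service unit, which would explain the "could not be found" message. A minimal way to check it from the host, assuming the default profile name minikube:

$ minikube ssh -- sudo systemctl status kubelet
$ minikube ssh -- sudo journalctl -u kubelet --no-pager | tail -n 50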

@lthistle
Author

Also, here's the output of minikube logs:

==> Docker <==
-- Logs begin at Tue 2020-08-18 21:26:54 UTC, end at Tue 2020-08-18 21:27:21 UTC. --
Aug 18 21:26:55 minikube systemd[1]: Starting Docker Application Container Engine...
Aug 18 21:26:55 minikube dockerd[157]: time="2020-08-18T21:26:55.169535748Z" level=info msg="Starting up"
Aug 18 21:26:55 minikube dockerd[157]: time="2020-08-18T21:26:55.173681940Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Aug 18 21:26:55 minikube dockerd[157]: time="2020-08-18T21:26:55.173710371Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Aug 18 21:26:55 minikube dockerd[157]: time="2020-08-18T21:26:55.173728072Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0  <nil>}] <nil>}" module=grpc
Aug 18 21:26:55 minikube dockerd[157]: time="2020-08-18T21:26:55.173740364Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Aug 18 21:26:55 minikube dockerd[157]: time="2020-08-18T21:26:55.176558075Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Aug 18 21:26:55 minikube dockerd[157]: time="2020-08-18T21:26:55.176845762Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Aug 18 21:26:55 minikube dockerd[157]: time="2020-08-18T21:26:55.176873064Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0  <nil>}] <nil>}" module=grpc
Aug 18 21:26:55 minikube dockerd[157]: time="2020-08-18T21:26:55.176883927Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Aug 18 21:26:59 minikube dockerd[157]: time="2020-08-18T21:26:59.383483485Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
Aug 18 21:26:59 minikube dockerd[157]: time="2020-08-18T21:26:59.465490424Z" level=warning msg="Your kernel does not support swap memory limit"
Aug 18 21:26:59 minikube dockerd[157]: time="2020-08-18T21:26:59.465516493Z" level=warning msg="Your kernel does not support cgroup rt period"
Aug 18 21:26:59 minikube dockerd[157]: time="2020-08-18T21:26:59.465521926Z" level=warning msg="Your kernel does not support cgroup rt runtime"
Aug 18 21:26:59 minikube dockerd[157]: time="2020-08-18T21:26:59.465526487Z" level=warning msg="Your kernel does not support cgroup blkio weight"
Aug 18 21:26:59 minikube dockerd[157]: time="2020-08-18T21:26:59.465530999Z" level=warning msg="Your kernel does not support cgroup blkio weight_device"
Aug 18 21:26:59 minikube dockerd[157]: time="2020-08-18T21:26:59.465661264Z" level=info msg="Loading containers: start."
Aug 18 21:26:59 minikube dockerd[157]: time="2020-08-18T21:26:59.569059471Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.18.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Aug 18 21:26:59 minikube dockerd[157]: time="2020-08-18T21:26:59.625269631Z" level=info msg="Loading containers: done."
Aug 18 21:26:59 minikube dockerd[157]: time="2020-08-18T21:26:59.664457646Z" level=info msg="Docker daemon" commit=afacb8b7f0 graphdriver(s)=overlay2 version=19.03.8
Aug 18 21:26:59 minikube dockerd[157]: time="2020-08-18T21:26:59.664604108Z" level=info msg="Daemon has completed initialization"
Aug 18 21:26:59 minikube dockerd[157]: time="2020-08-18T21:26:59.712427789Z" level=info msg="API listen on /run/docker.sock"
Aug 18 21:26:59 minikube systemd[1]: Started Docker Application Container Engine.
Aug 18 21:27:00 minikube systemd[1]: docker.service: Current command vanished from the unit file, execution of the command list won't be resumed.
Aug 18 21:27:01 minikube systemd[1]: Stopping Docker Application Container Engine...
Aug 18 21:27:01 minikube dockerd[157]: time="2020-08-18T21:27:01.022448612Z" level=info msg="Processing signal 'terminated'"
Aug 18 21:27:01 minikube dockerd[157]: time="2020-08-18T21:27:01.024026649Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
Aug 18 21:27:01 minikube dockerd[157]: time="2020-08-18T21:27:01.024954181Z" level=info msg="Daemon shutdown complete"
Aug 18 21:27:01 minikube systemd[1]: docker.service: Succeeded.
Aug 18 21:27:01 minikube systemd[1]: Stopped Docker Application Container Engine.
Aug 18 21:27:01 minikube systemd[1]: Starting Docker Application Container Engine...
Aug 18 21:27:01 minikube dockerd[375]: time="2020-08-18T21:27:01.080790510Z" level=info msg="Starting up"
Aug 18 21:27:01 minikube dockerd[375]: time="2020-08-18T21:27:01.083979892Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Aug 18 21:27:01 minikube dockerd[375]: time="2020-08-18T21:27:01.084087039Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Aug 18 21:27:01 minikube dockerd[375]: time="2020-08-18T21:27:01.084148434Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0  <nil>}] <nil>}" module=grpc
Aug 18 21:27:01 minikube dockerd[375]: time="2020-08-18T21:27:01.084209104Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Aug 18 21:27:01 minikube dockerd[375]: time="2020-08-18T21:27:01.086528782Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Aug 18 21:27:01 minikube dockerd[375]: time="2020-08-18T21:27:01.086548763Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Aug 18 21:27:01 minikube dockerd[375]: time="2020-08-18T21:27:01.086563817Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0  <nil>}] <nil>}" module=grpc
Aug 18 21:27:01 minikube dockerd[375]: time="2020-08-18T21:27:01.086577358Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Aug 18 21:27:01 minikube dockerd[375]: time="2020-08-18T21:27:01.098941353Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
Aug 18 21:27:01 minikube dockerd[375]: time="2020-08-18T21:27:01.115563006Z" level=warning msg="Your kernel does not support swap memory limit"
Aug 18 21:27:01 minikube dockerd[375]: time="2020-08-18T21:27:01.115596372Z" level=warning msg="Your kernel does not support cgroup rt period"
Aug 18 21:27:01 minikube dockerd[375]: time="2020-08-18T21:27:01.115606187Z" level=warning msg="Your kernel does not support cgroup rt runtime"
Aug 18 21:27:01 minikube dockerd[375]: time="2020-08-18T21:27:01.115614866Z" level=warning msg="Your kernel does not support cgroup blkio weight"
Aug 18 21:27:01 minikube dockerd[375]: time="2020-08-18T21:27:01.115623398Z" level=warning msg="Your kernel does not support cgroup blkio weight_device"
Aug 18 21:27:01 minikube dockerd[375]: time="2020-08-18T21:27:01.115823359Z" level=info msg="Loading containers: start."
Aug 18 21:27:01 minikube dockerd[375]: time="2020-08-18T21:27:01.226617191Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.18.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Aug 18 21:27:01 minikube dockerd[375]: time="2020-08-18T21:27:01.281113623Z" level=info msg="Loading containers: done."
Aug 18 21:27:01 minikube dockerd[375]: time="2020-08-18T21:27:01.317305361Z" level=info msg="Docker daemon" commit=afacb8b7f0 graphdriver(s)=overlay2 version=19.03.8
Aug 18 21:27:01 minikube dockerd[375]: time="2020-08-18T21:27:01.317387022Z" level=info msg="Daemon has completed initialization"
Aug 18 21:27:01 minikube dockerd[375]: time="2020-08-18T21:27:01.338709008Z" level=info msg="API listen on /var/run/docker.sock"
Aug 18 21:27:01 minikube dockerd[375]: time="2020-08-18T21:27:01.338774083Z" level=info msg="API listen on [::]:2376"
Aug 18 21:27:01 minikube systemd[1]: Started Docker Application Container Engine.

==> container status <==
CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
99aa7b18ce567       303ce5db0e90d       7 seconds ago       Running             etcd                      0                   f70a8534a1d4f
b560795b72060       7e28efa976bd1       7 seconds ago       Running             kube-apiserver            0                   02a8fd478dcba
e167cb64ec05c       da26705ccb4b5       7 seconds ago       Running             kube-controller-manager   0                   49b44d333844f
bb5f3d5e06cab       76216c34ed0c7       7 seconds ago       Running             kube-scheduler            0                   06beceb8aa705

==> describe nodes <==
No resources found in default namespace.

==> dmesg <==
[Aug18 18:06] ENERGY_PERF_BIAS: Set to 'normal', was 'performance'
[  +0.999136] platform eisa.0: EISA: Cannot allocate resource for mainboard
[  +0.000001] platform eisa.0: Cannot allocate resource for EISA slot 1
[  +0.000001] platform eisa.0: Cannot allocate resource for EISA slot 2
[  +0.000001] platform eisa.0: Cannot allocate resource for EISA slot 3
[  +0.000000] platform eisa.0: Cannot allocate resource for EISA slot 4
[  +0.000001] platform eisa.0: Cannot allocate resource for EISA slot 5
[  +0.000001] platform eisa.0: Cannot allocate resource for EISA slot 6
[  +0.000000] platform eisa.0: Cannot allocate resource for EISA slot 7
[  +0.000001] platform eisa.0: Cannot allocate resource for EISA slot 8
[  +0.123671] r8169 0000:02:00.0: can't disable ASPM; OS doesn't have ASPM control
[  +0.013690] r8169 0000:03:00.0: can't disable ASPM; OS doesn't have ASPM control
[  +0.322080] ata2.00: LPM support broken, forcing max_power
[  +0.000126] ata2.00: supports DRM functions and may not be fully accessible
[  +0.000054] ata2.00: READ LOG DMA EXT failed, trying PIO
[  +0.019425] ata2.00: LPM support broken, forcing max_power
[  +0.000134] ata2.00: supports DRM functions and may not be fully accessible
[  +1.962113] ACPI Warning: SystemIO range 0x0000000000001828-0x000000000000182F conflicts with OpRegion 0x0000000000001800-0x000000000000187F (\PMIO) (20190816/utaddress-213)
[  +0.000016] ACPI Warning: SystemIO range 0x0000000000001C40-0x0000000000001C4F conflicts with OpRegion 0x0000000000001C00-0x0000000000001FFF (\GPR) (20190816/utaddress-213)
[  +0.000003] ACPI Warning: SystemIO range 0x0000000000001C30-0x0000000000001C3F conflicts with OpRegion 0x0000000000001C00-0x0000000000001C3F (\GPRL) (20190816/utaddress-213)
[  +0.000001] ACPI Warning: SystemIO range 0x0000000000001C30-0x0000000000001C3F conflicts with OpRegion 0x0000000000001C00-0x0000000000001FFF (\GPR) (20190816/utaddress-213)
[  +0.000003] ACPI Warning: SystemIO range 0x0000000000001C00-0x0000000000001C2F conflicts with OpRegion 0x0000000000001C00-0x0000000000001C3F (\GPRL) (20190816/utaddress-213)
[  +0.000001] ACPI Warning: SystemIO range 0x0000000000001C00-0x0000000000001C2F conflicts with OpRegion 0x0000000000001C00-0x0000000000001FFF (\GPR) (20190816/utaddress-213)
[  +0.000003] lpc_ich: Resource conflict(s) found affecting gpio_ich
[  +1.596994] Started bpfilter
[Aug18 18:14] kauditd_printk_skb: 24 callbacks suppressed
[Aug18 18:22] kauditd_printk_skb: 4 callbacks suppressed
[Aug18 18:26] printk: systemd: 47 output lines suppressed due to ratelimiting
[Aug18 18:30] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
[Aug18 19:57] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
[  +0.000002] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.

==> etcd [99aa7b18ce56] <==
[WARNING] Deprecated '--logger=capnslog' flag is set; use '--logger=zap' flag instead
2020-08-18 21:27:14.879669 I | etcdmain: etcd Version: 3.4.3
2020-08-18 21:27:14.879700 I | etcdmain: Git SHA: 3cf2f69b5
2020-08-18 21:27:14.879702 I | etcdmain: Go Version: go1.12.12
2020-08-18 21:27:14.879705 I | etcdmain: Go OS/Arch: linux/amd64
2020-08-18 21:27:14.879708 I | etcdmain: setting maximum number of CPUs to 4, total number of available CPUs is 4
[WARNING] Deprecated '--logger=capnslog' flag is set; use '--logger=zap' flag instead
2020-08-18 21:27:14.879769 I | embed: peerTLS: cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
2020-08-18 21:27:14.880468 I | embed: name = minikube
2020-08-18 21:27:14.880478 I | embed: data dir = /var/lib/minikube/etcd
2020-08-18 21:27:14.880482 I | embed: member dir = /var/lib/minikube/etcd/member
2020-08-18 21:27:14.880484 I | embed: heartbeat = 100ms
2020-08-18 21:27:14.880487 I | embed: election = 1000ms
2020-08-18 21:27:14.880489 I | embed: snapshot count = 10000
2020-08-18 21:27:14.880499 I | embed: advertise client URLs = https://172.17.0.3:2379
2020-08-18 21:27:14.966848 I | etcdserver: starting member b273bc7741bcb020 in cluster 86482fea2286a1d2
raft2020/08/18 21:27:14 INFO: b273bc7741bcb020 switched to configuration voters=()
raft2020/08/18 21:27:14 INFO: b273bc7741bcb020 became follower at term 0
raft2020/08/18 21:27:14 INFO: newRaft b273bc7741bcb020 [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
raft2020/08/18 21:27:14 INFO: b273bc7741bcb020 became follower at term 1
raft2020/08/18 21:27:14 INFO: b273bc7741bcb020 switched to configuration voters=(12858828581462913056)
2020-08-18 21:27:15.040648 W | auth: simple token is not cryptographically signed
2020-08-18 21:27:15.058545 I | etcdserver: starting server... [version: 3.4.3, cluster version: to_be_decided]
2020-08-18 21:27:15.060215 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
2020-08-18 21:27:15.060922 I | embed: listening for metrics on http://127.0.0.1:2381
2020-08-18 21:27:15.061040 I | etcdserver: b273bc7741bcb020 as single-node; fast-forwarding 9 ticks (election ticks 10)
2020-08-18 21:27:15.061108 I | embed: listening for peers on 172.17.0.3:2380
raft2020/08/18 21:27:15 INFO: b273bc7741bcb020 switched to configuration voters=(12858828581462913056)
2020-08-18 21:27:15.061422 I | etcdserver/membership: added member b273bc7741bcb020 [https://172.17.0.3:2380] to cluster 86482fea2286a1d2
raft2020/08/18 21:27:15 INFO: b273bc7741bcb020 is starting a new election at term 1
raft2020/08/18 21:27:15 INFO: b273bc7741bcb020 became candidate at term 2
raft2020/08/18 21:27:15 INFO: b273bc7741bcb020 received MsgVoteResp from b273bc7741bcb020 at term 2
raft2020/08/18 21:27:15 INFO: b273bc7741bcb020 became leader at term 2
raft2020/08/18 21:27:15 INFO: raft.node: b273bc7741bcb020 elected leader b273bc7741bcb020 at term 2
2020-08-18 21:27:15.877260 I | etcdserver: published {Name:minikube ClientURLs:[https://172.17.0.3:2379]} to cluster 86482fea2286a1d2
2020-08-18 21:27:15.877367 I | embed: ready to serve client requests
2020-08-18 21:27:15.877574 I | etcdserver: setting up the initial cluster version to 3.4
2020-08-18 21:27:15.877709 I | embed: ready to serve client requests
2020-08-18 21:27:15.878427 I | embed: serving client requests on 172.17.0.3:2379
2020-08-18 21:27:15.878664 I | embed: serving client requests on 127.0.0.1:2379
2020-08-18 21:27:15.881941 N | etcdserver/membership: set the initial cluster version to 3.4
2020-08-18 21:27:15.882110 I | etcdserver/api: enabled capabilities for version 3.4

==> kernel <==
 21:27:23 up  3:20,  0 users,  load average: 1.91, 1.48, 1.17
Linux minikube 5.4.0-42-generic #46~18.04.1-Ubuntu SMP Fri Jul 10 07:21:24 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
PRETTY_NAME="Ubuntu 20.04 LTS"

==> kube-apiserver [b560795b7206] <==
I0818 21:27:16.585627       1 client.go:361] parsed scheme: "endpoint"
I0818 21:27:16.585650       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
I0818 21:27:16.593176       1 client.go:361] parsed scheme: "endpoint"
I0818 21:27:16.593199       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
W0818 21:27:16.697301       1 genericapiserver.go:409] Skipping API batch/v2alpha1 because it has no resources.
W0818 21:27:16.705546       1 genericapiserver.go:409] Skipping API discovery.k8s.io/v1alpha1 because it has no resources.
W0818 21:27:16.715046       1 genericapiserver.go:409] Skipping API node.k8s.io/v1alpha1 because it has no resources.
W0818 21:27:16.729716       1 genericapiserver.go:409] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
W0818 21:27:16.732524       1 genericapiserver.go:409] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
W0818 21:27:16.746568       1 genericapiserver.go:409] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
W0818 21:27:16.769503       1 genericapiserver.go:409] Skipping API apps/v1beta2 because it has no resources.
W0818 21:27:16.769530       1 genericapiserver.go:409] Skipping API apps/v1beta1 because it has no resources.
I0818 21:27:16.776319       1 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
I0818 21:27:16.776342       1 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
I0818 21:27:16.777778       1 client.go:361] parsed scheme: "endpoint"
I0818 21:27:16.777798       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
I0818 21:27:16.784143       1 client.go:361] parsed scheme: "endpoint"
I0818 21:27:16.784163       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
I0818 21:27:18.350016       1 dynamic_cafile_content.go:167] Starting request-header::/var/lib/minikube/certs/front-proxy-ca.crt
I0818 21:27:18.350049       1 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
I0818 21:27:18.350559       1 secure_serving.go:178] Serving securely on [::]:8443
I0818 21:27:18.350608       1 controller.go:81] Starting OpenAPI AggregationController
I0818 21:27:18.350636       1 dynamic_serving_content.go:130] Starting serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key
I0818 21:27:18.350667       1 tlsconfig.go:240] Starting DynamicServingCertificateController
I0818 21:27:18.353144       1 autoregister_controller.go:141] Starting autoregister controller
I0818 21:27:18.353331       1 cache.go:32] Waiting for caches to sync for autoregister controller
I0818 21:27:18.353665       1 crd_finalizer.go:266] Starting CRDFinalizer
I0818 21:27:18.353808       1 apiservice_controller.go:94] Starting APIServiceRegistrationController
I0818 21:27:18.353863       1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
I0818 21:27:18.353913       1 crdregistration_controller.go:111] Starting crd-autoregister controller
I0818 21:27:18.353978       1 shared_informer.go:223] Waiting for caches to sync for crd-autoregister
I0818 21:27:18.355172       1 available_controller.go:387] Starting AvailableConditionController
I0818 21:27:18.355304       1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
I0818 21:27:18.355635       1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
I0818 21:27:18.355644       1 shared_informer.go:223] Waiting for caches to sync for cluster_authentication_trust_controller
I0818 21:27:18.355938       1 naming_controller.go:291] Starting NamingConditionController
I0818 21:27:18.357457       1 establishing_controller.go:76] Starting EstablishingController
I0818 21:27:18.357495       1 nonstructuralschema_controller.go:186] Starting NonStructuralSchemaConditionController
I0818 21:27:18.357508       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
E0818 21:27:18.362096       1 controller.go:152] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/172.17.0.3, ResourceVersion: 0, AdditionalErrorMsg: 
I0818 21:27:18.366379       1 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
I0818 21:27:18.366445       1 dynamic_cafile_content.go:167] Starting request-header::/var/lib/minikube/certs/front-proxy-ca.crt
I0818 21:27:18.371643       1 controller.go:86] Starting OpenAPI controller
I0818 21:27:18.375108       1 customresource_discovery_controller.go:209] Starting DiscoveryController
I0818 21:27:18.453511       1 cache.go:39] Caches are synced for autoregister controller
I0818 21:27:18.454099       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
I0818 21:27:18.454275       1 shared_informer.go:230] Caches are synced for crd-autoregister 
I0818 21:27:18.455447       1 cache.go:39] Caches are synced for AvailableConditionController controller
I0818 21:27:18.455692       1 shared_informer.go:230] Caches are synced for cluster_authentication_trust_controller 
I0818 21:27:19.350149       1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
I0818 21:27:19.350200       1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
I0818 21:27:19.361344       1 storage_scheduling.go:134] created PriorityClass system-node-critical with value 2000001000
I0818 21:27:19.365837       1 storage_scheduling.go:134] created PriorityClass system-cluster-critical with value 2000000000
I0818 21:27:19.365966       1 storage_scheduling.go:143] all system priority classes are created successfully or already exist.
I0818 21:27:20.011371       1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I0818 21:27:20.065664       1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
W0818 21:27:20.151512       1 lease.go:224] Resetting endpoints for master service "kubernetes" to [172.17.0.3]
I0818 21:27:20.152560       1 controller.go:606] quota admission added evaluator for: endpoints
I0818 21:27:20.158338       1 controller.go:606] quota admission added evaluator for: endpointslices.discovery.k8s.io
I0818 21:27:20.647729       1 controller.go:606] quota admission added evaluator for: serviceaccounts

==> kube-controller-manager [e167cb64ec05] <==
I0818 21:27:21.047581       1 shared_informer.go:223] Waiting for caches to sync for namespace
I0818 21:27:21.295958       1 controllermanager.go:533] Started "job"
I0818 21:27:21.296098       1 job_controller.go:144] Starting job controller
I0818 21:27:21.296120       1 shared_informer.go:223] Waiting for caches to sync for job
I0818 21:27:21.444971       1 controllermanager.go:533] Started "csrsigning"
I0818 21:27:21.445047       1 certificate_controller.go:119] Starting certificate controller "csrsigning"
I0818 21:27:21.445059       1 shared_informer.go:223] Waiting for caches to sync for certificate-csrsigning
I0818 21:27:21.445084       1 dynamic_serving_content.go:130] Starting csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key
I0818 21:27:21.947808       1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for endpoints
I0818 21:27:21.947914       1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for horizontalpodautoscalers.autoscaling
I0818 21:27:21.947994       1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for roles.rbac.authorization.k8s.io
I0818 21:27:21.948066       1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for endpointslices.discovery.k8s.io
I0818 21:27:21.948120       1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for deployments.apps
I0818 21:27:21.948170       1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for events.events.k8s.io
I0818 21:27:21.948227       1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for rolebindings.rbac.authorization.k8s.io
I0818 21:27:21.948330       1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for controllerrevisions.apps
I0818 21:27:21.948373       1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for cronjobs.batch
I0818 21:27:21.948415       1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for ingresses.networking.k8s.io
I0818 21:27:21.948494       1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for serviceaccounts
W0818 21:27:21.948528       1 shared_informer.go:461] resyncPeriod 44606493747140 is smaller than resyncCheckPeriod 53492420671671 and the informer has already started. Changing it to 53492420671671
I0818 21:27:21.948601       1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for ingresses.extensions
I0818 21:27:21.948644       1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for poddisruptionbudgets.policy
I0818 21:27:21.948709       1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for daemonsets.apps
I0818 21:27:21.948742       1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for replicasets.apps
I0818 21:27:21.948809       1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for statefulsets.apps
I0818 21:27:21.948845       1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for jobs.batch
I0818 21:27:21.948877       1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for leases.coordination.k8s.io
I0818 21:27:21.948913       1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for limitranges
I0818 21:27:21.948951       1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for podtemplates
I0818 21:27:21.948999       1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for networkpolicies.networking.k8s.io
I0818 21:27:21.949024       1 controllermanager.go:533] Started "resourcequota"
I0818 21:27:21.949306       1 resource_quota_controller.go:272] Starting resource quota controller
I0818 21:27:21.949326       1 shared_informer.go:223] Waiting for caches to sync for resource quota
I0818 21:27:21.949353       1 resource_quota_monitor.go:303] QuotaMonitor running
I0818 21:27:22.603262       1 garbagecollector.go:133] Starting garbage collector controller
I0818 21:27:22.603404       1 shared_informer.go:223] Waiting for caches to sync for garbage collector
I0818 21:27:22.603431       1 graph_builder.go:282] GraphBuilder running
I0818 21:27:22.603540       1 controllermanager.go:533] Started "garbagecollector"
I0818 21:27:22.631198       1 controllermanager.go:533] Started "tokencleaner"
I0818 21:27:22.631238       1 tokencleaner.go:118] Starting token cleaner controller
I0818 21:27:22.631395       1 shared_informer.go:223] Waiting for caches to sync for token_cleaner
I0818 21:27:22.631411       1 shared_informer.go:230] Caches are synced for token_cleaner 
I0818 21:27:22.650061       1 controllermanager.go:533] Started "persistentvolume-expander"
I0818 21:27:22.650327       1 expand_controller.go:319] Starting expand controller
I0818 21:27:22.650337       1 shared_informer.go:223] Waiting for caches to sync for expand
I0818 21:27:22.844193       1 controllermanager.go:533] Started "clusterrole-aggregation"
W0818 21:27:22.844226       1 controllermanager.go:525] Skipping "root-ca-cert-publisher"
I0818 21:27:22.844276       1 clusterroleaggregation_controller.go:149] Starting ClusterRoleAggregator
I0818 21:27:22.844285       1 shared_informer.go:223] Waiting for caches to sync for ClusterRoleAggregator
I0818 21:27:23.094299       1 controllermanager.go:533] Started "replicationcontroller"
I0818 21:27:23.094366       1 replica_set.go:181] Starting replicationcontroller controller
I0818 21:27:23.094375       1 shared_informer.go:223] Waiting for caches to sync for ReplicationController
I0818 21:27:23.095119       1 request.go:621] Throttling request took 1.047516628s, request: GET:https://control-plane.minikube.internal:8443/apis/rbac.authorization.k8s.io/v1?timeout=32s
I0818 21:27:23.344788       1 controllermanager.go:533] Started "podgc"
W0818 21:27:23.344824       1 controllermanager.go:525] Skipping "nodeipam"
I0818 21:27:23.344897       1 gc_controller.go:89] Starting GC controller
I0818 21:27:23.344963       1 shared_informer.go:223] Waiting for caches to sync for GC
I0818 21:27:23.594691       1 controllermanager.go:533] Started "pvc-protection"
I0818 21:27:23.599386       1 pvc_protection_controller.go:101] Starting PVC protection controller
I0818 21:27:23.599412       1 shared_informer.go:223] Waiting for caches to sync for PVC protection

==> kube-scheduler [bb5f3d5e06ca] <==
I0818 21:27:14.965022       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
I0818 21:27:14.965065       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
I0818 21:27:15.623455       1 serving.go:313] Generated self-signed cert in-memory
W0818 21:27:18.360642       1 authentication.go:349] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
W0818 21:27:18.360781       1 authentication.go:297] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
W0818 21:27:18.360834       1 authentication.go:298] Continuing without authentication configuration. This may treat all requests as anonymous.
W0818 21:27:18.360875       1 authentication.go:299] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
I0818 21:27:18.395412       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
I0818 21:27:18.395492       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
W0818 21:27:18.407898       1 authorization.go:47] Authorization is disabled
W0818 21:27:18.407918       1 authentication.go:40] Authentication is disabled
I0818 21:27:18.407930       1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
I0818 21:27:18.409175       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0818 21:27:18.409201       1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0818 21:27:18.409627       1 secure_serving.go:178] Serving securely on 127.0.0.1:10259
I0818 21:27:18.409920       1 tlsconfig.go:240] Starting DynamicServingCertificateController
E0818 21:27:18.410862       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0818 21:27:18.414728       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0818 21:27:18.414728       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0818 21:27:18.414798       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0818 21:27:18.415016       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0818 21:27:18.414852       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E0818 21:27:18.414903       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0818 21:27:18.416941       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0818 21:27:18.417537       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0818 21:27:19.246492       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0818 21:27:19.369871       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0818 21:27:19.443097       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0818 21:27:19.529862       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0818 21:27:19.632133       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0818 21:27:19.647537       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0818 21:27:19.808358       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
I0818 21:27:22.209493       1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 

==> kubelet <==
-- Logs begin at Tue 2020-08-18 21:26:54 UTC, end at Tue 2020-08-18 21:27:24 UTC. --
Aug 18 21:27:18 minikube kubelet[1085]: E0818 21:27:18.776064    1085 kubelet.go:2267] node "minikube" not found
Aug 18 21:27:18 minikube kubelet[1085]: E0818 21:27:18.876332    1085 kubelet.go:2267] node "minikube" not found
Aug 18 21:27:18 minikube kubelet[1085]: E0818 21:27:18.976690    1085 kubelet.go:2267] node "minikube" not found
Aug 18 21:27:19 minikube kubelet[1085]: E0818 21:27:19.077004    1085 kubelet.go:2267] node "minikube" not found
Aug 18 21:27:19 minikube kubelet[1085]: E0818 21:27:19.177437    1085 kubelet.go:2267] node "minikube" not found
Aug 18 21:27:19 minikube kubelet[1085]: E0818 21:27:19.277686    1085 kubelet.go:2267] node "minikube" not found
Aug 18 21:27:19 minikube kubelet[1085]: E0818 21:27:19.377823    1085 kubelet.go:2267] node "minikube" not found
Aug 18 21:27:19 minikube kubelet[1085]: E0818 21:27:19.477952    1085 kubelet.go:2267] node "minikube" not found
Aug 18 21:27:19 minikube kubelet[1085]: E0818 21:27:19.578119    1085 kubelet.go:2267] node "minikube" not found
Aug 18 21:27:19 minikube kubelet[1085]: E0818 21:27:19.678258    1085 kubelet.go:2267] node "minikube" not found
Aug 18 21:27:19 minikube kubelet[1085]: E0818 21:27:19.778399    1085 kubelet.go:2267] node "minikube" not found
Aug 18 21:27:19 minikube kubelet[1085]: E0818 21:27:19.878542    1085 kubelet.go:2267] node "minikube" not found
Aug 18 21:27:19 minikube kubelet[1085]: E0818 21:27:19.978721    1085 kubelet.go:2267] node "minikube" not found
Aug 18 21:27:20 minikube kubelet[1085]: E0818 21:27:20.078896    1085 kubelet.go:2267] node "minikube" not found
Aug 18 21:27:20 minikube kubelet[1085]: E0818 21:27:20.179172    1085 kubelet.go:2267] node "minikube" not found
Aug 18 21:27:20 minikube kubelet[1085]: E0818 21:27:20.279526    1085 kubelet.go:2267] node "minikube" not found
Aug 18 21:27:20 minikube kubelet[1085]: E0818 21:27:20.379726    1085 kubelet.go:2267] node "minikube" not found
Aug 18 21:27:20 minikube kubelet[1085]: E0818 21:27:20.479928    1085 kubelet.go:2267] node "minikube" not found
Aug 18 21:27:20 minikube kubelet[1085]: E0818 21:27:20.580313    1085 kubelet.go:2267] node "minikube" not found
Aug 18 21:27:20 minikube kubelet[1085]: E0818 21:27:20.680519    1085 kubelet.go:2267] node "minikube" not found
Aug 18 21:27:20 minikube kubelet[1085]: E0818 21:27:20.780630    1085 kubelet.go:2267] node "minikube" not found
Aug 18 21:27:20 minikube kubelet[1085]: E0818 21:27:20.880786    1085 kubelet.go:2267] node "minikube" not found

@lthistle
Author

If you look at the --alsologtostderr output, it appears my /etc/kubernetes folder is missing the files admin.conf, kubelet.conf, controller-manager.conf, and scheduler.conf. Is there a way to generate these files?
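For what it's worth, kubeadm can regenerate those kubeconfig files on its own; a sketch, assuming the binary and config paths shown in the log above, and run inside the minikube container:

$ minikube ssh
$ sudo /var/lib/minikube/binaries/v1.18.3/kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml

Normally minikube drives this step itself during minikube start, so regenerating them by hand is more of a diagnostic than a fix.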

@medyagh
Member

medyagh commented Aug 31, 2020

@lthistle it seems like your Linux does not have this kernel module:

 failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.4.0-42-generic\n", err: exit status 1

minikube mounts "-v /lib/modules:/lib/modules:ro" into the container. Is it possible that on your Linux the kernel modules are not located in /lib/modules?
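A quick way to verify where the modules live, assuming a standard Ubuntu layout:

$ ls -d /lib/modules/$(uname -r)    # should print the directory if it exists
$ ls /lib/modules/$(uname -r) | head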

@medyagh
Member

medyagh commented Aug 31, 2020

@lthistle do you mind sharing the output of

$ modprobe configs
$ echo $?
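
For reference, echo $? prints the exit status of the previous command, so 0 would mean the configs module loaded. On stock Ubuntu kernels the kernel config is usually shipped as a plain file rather than as a loadable configs module, which is likely why kubeadm only emits a [WARNING] for this check; you can confirm with:

$ ls /boot/config-$(uname -r)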

/triage needs-information
/triage support

@k8s-ci-robot k8s-ci-robot added triage/needs-information Indicates an issue needs more information in order to work on it. kind/support Categorizes issue or PR as a support question. labels Aug 31, 2020
@medyagh
Member

medyagh commented Sep 9, 2020

@lthistle I haven't heard back from you, and this seems to be a dupe of #8370.

Let's track this bug there for a centralized response.

@medyagh medyagh closed this as completed Sep 9, 2020
@royzhao7

I faced the same issue on Windows.
