
none: kubelet failed to start -> apiserver process never appeared #5451

Closed
undsoul opened this issue Sep 24, 2019 · 26 comments
Labels
co/none-driver · kind/support (Categorizes issue or PR as a support question.)

Comments

@undsoul

undsoul commented Sep 24, 2019

minikube v1.4.0 on Ubuntu 18.04 (vbox/amd64)
🀹  Running on localhost (CPUs=2, Memory=11001MB, Disk=51192MB) ...
ℹ️   OS release is Ubuntu 18.04.3 LTS
🐳  Preparing Kubernetes v1.16.0 on Docker 18.09.9 ...
    β–ͺ kubelet.resolv-conf=/run/systemd/resolve/resolv.conf
πŸ’Ύ  Downloading kubelet v1.16.0
πŸ’Ύ  Downloading kubeadm v1.16.0
🚜  Pulling images ...
πŸš€  Launching Kubernetes ... 

πŸ’£  Error starting cluster: cmd failed: sudo env PATH=/var/lib/minikube/binaries/v1.16.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap

: running command: sudo env PATH=/var/lib/minikube/binaries/v1.16.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap
 output: [init] Using Kubernetes version: v1.16.0
[preflight] Running pre-flight checks
	[WARNING Hostname]: hostname "minikube" could not be reached
	[WARNING Hostname]: hostname "minikube": lookup minikube on 127.0.0.53:53: server misbehaving
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [minikube localhost] and IPs [10.0.2.15 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [minikube localhost] and IPs [10.0.2.15 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.

Unfortunately, an error has occurred:
	timed out waiting for the condition

This error is likely caused by:
	- The kubelet is not running
	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	- 'systemctl status kubelet'
	- 'journalctl -xeu kubelet'

Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
Here is one example how you may list all Kubernetes containers running in docker:
	- 'docker ps -a | grep kube | grep -v pause'
	Once you have found the failing container, you can inspect its logs with:
	- 'docker logs CONTAINERID'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
: running command: sudo env PATH=/var/lib/minikube/binaries/v1.16.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap
.: exit status 1

😿  Sorry that minikube crashed. If this was unexpected, we would love to hear from you:
πŸ‘‰  https://github.com/kubernetes/minikube/issues/new/choose
root@qlikadmin-VirtualBox:~# kubectl cluster-info dump
The connection to the server 10.0.2.15:8443 was refused - did you specify the right host or port?

then:

root@qlikadmin-VirtualBox:~# sudo minikube start --vm-driver=none --kubernetes-version=v1.15.00
πŸ˜„  minikube v1.4.0 on Ubuntu 18.04 (vbox/amd64)
πŸ’£  Unable to parse "v1.15.00": Patch number must not contain leading zeroes "00"
root@qlikadmin-VirtualBox:~# minikube start --vm-driver=none --kubernetes-version=v1.15.0
πŸ˜„  minikube v1.4.0 on Ubuntu 18.04 (vbox/amd64)
πŸ’£  Error: You have selected Kubernetes v1.15.0, but the existing cluster for your profile is running Kubernetes v1.16.0. Non-destructive downgrades are not supported, but you can proceed by performing one of the following options:

* Recreate the cluster using Kubernetes v1.15.0: Run "minikube delete ", then "minikube start  --kubernetes-version=1.15.0"
* Create a second cluster with Kubernetes v1.15.0: Run "minikube start -p <new name> --kubernetes-version=1.15.0"
* Reuse the existing cluster with Kubernetes v1.16.0 or newer: Run "minikube start  --kubernetes-version=1.16.0"
root@qlikadmin-VirtualBox:~# sudo minikube start --vm-driver=none --kubernetes-version=v1.15
πŸ˜„  minikube v1.4.0 on Ubuntu 18.04 (vbox/amd64)
πŸ’£  Unable to parse "v1.15": No Major.Minor.Patch elements found
root@qlikadmin-VirtualBox:~# minikube start --vm-driver=none
πŸ˜„  minikube v1.4.0 on Ubuntu 18.04 (vbox/amd64)
πŸ’‘  Tip: Use 'minikube start -p <name>' to create a new cluster, or 'minikube delete' to delete this one.
πŸ”„  Starting existing none VM for "minikube" ...
βŒ›  Waiting for the host to be provisioned ...
🐳  Preparing Kubernetes v1.16.0 on Docker 18.09.9 ...
    β–ͺ kubelet.resolv-conf=/run/systemd/resolve/resolv.conf
πŸ”„  Relaunching Kubernetes using kubeadm ... 

πŸ’£  Error restarting cluster: waiting for apiserver: apiserver process never appeared

@medyagh
Member

medyagh commented Sep 24, 2019

Thanks for taking the time to create this issue,

So there are two errors. The first one I suspect is a networking issue, or perhaps a VM with too few resources, causing the health check to time out. I am curious: how much RAM and how many CPUs does your VM have?

The second error is because downgrading the cluster to an older Kubernetes version is not supported.

I am curious, have you already tried the suggestions in the error message?
If not, could you please run minikube delete and then try:
minikube start --vm-driver=none --alsologtostderr -v=8
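
A minimal sketch of that reset-and-retry sequence, with the verbose output captured to a file (the log file name is illustrative):

    # Remove the broken cluster state first
    minikube delete
    # Retry with verbose logging; keep a copy so it can be attached to the issue
    minikube start --vm-driver=none --alsologtostderr -v=8 2>&1 | tee minikube-start.log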

@medyagh medyagh added the triage/needs-information Indicates an issue needs more information in order to work on it. label Sep 24, 2019
@tstromberg
Contributor

minikube logs might help. It basically seems like the apiserver never launched, was never visible to the minikube process (possibly for security reasons), or was crash-looping so fast that it was never detected.
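
A sketch of the checks this implies, assuming a systemd host with the Docker runtime (as in this report); most of these commands also appear verbatim in the kubeadm output above:

    # Is the kubelet running, and why did it stop?
    systemctl status kubelet
    journalctl -xeu kubelet
    # Did the apiserver process ever appear? (minikube polls for this itself)
    sudo pgrep kube-apiserver
    # Did a control-plane container start and then exit?
    docker ps -a | grep kube | grep -v pause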

@tstromberg tstromberg added the kind/support Categorizes issue or PR as a support question. label Sep 24, 2019
@tstromberg tstromberg changed the title Error starting cluster: cmd failed none: kubelet failed to start -> apiserver process never appeared Sep 24, 2019
@BaerSy

BaerSy commented Oct 1, 2019

I also got the same problem. My env is Ubuntu 16.04, and I didn't use a VM. I started it with:

minikube start --vm-driver=none --apiserver-ips 127.0.0.1 --apiserver-name localhost

The results:
minikube v1.4.0 on Ubuntu 16.04

  • Tip: Use 'minikube start -p <name>' to create a new cluster, or 'minikube delete' to delete this one.
  • Using the running none "minikube" VM ...
  • Waiting for the host to be provisioned ...
  • Preparing Kubernetes v1.16.0 on Docker 18.09.7 ...
  • Relaunching Kubernetes using kubeadm ...

X Error restarting cluster: waiting for apiserver: apiserver process never appeared
*

Looking for any help...

@tstromberg
Contributor

tstromberg commented Oct 1, 2019

@BaerSy - Can you add the output of minikube logs?

Also, do you happen to see an apiserver process running (sudo pgrep apiserver)?
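
A quick note: the binary is named kube-apiserver, and pgrep matches a substring of the process name, so either pattern works:

    # List matching PIDs together with their full command lines
    sudo pgrep -a kube-apiserver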

@BaerSy

BaerSy commented Oct 1, 2019

@tstromberg No apiserver is running. How can I get it started?

@tstromberg
Contributor

@BaerSy - That's the failure, basically. kubeadm ran, but the apiserver didn't start. Here's what would help now:

  • The output of: minikube start --vm-driver=none --apiserver-ips 127.0.0.1 --apiserver-name localhost --alsologtostderr -v=8
  • The output of: minikube logs

Does minikube also crash on this host if the --apiserver-ips and --apiserver-name flags are omitted?
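
One way to capture both requested outputs so they can be attached here (the file names are illustrative):

    # Re-run the start with verbose logging, keeping a copy of everything printed
    minikube start --vm-driver=none --apiserver-ips 127.0.0.1 --apiserver-name localhost --alsologtostderr -v=8 2>&1 | tee start.log
    # Dump the cluster logs to a second file
    minikube logs > minikube-logs.txt 2>&1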

@BaerSy

BaerSy commented Oct 1, 2019

Logs are below. minikube also crashes without the --apiserver-ips and --apiserver-name flags:

root@ecs-de1e-0004:~# minikube start --vm-driver=none --apiserver-ips 127.0.0.1 --apiserver-name localhost --alsologtostderr -v=8
W1002 05:08:26.856768 9944 root.go:239] Error reading config file at /root/.minikube/config/config.json: open /root/.minikube/config/config.json: no such file or directory
I1002 05:08:26.857019 9944 notify.go:125] Checking for updates...
I1002 05:08:27.358525 9944 start.go:236] hostinfo: {"hostname":"ecs-de1e-0004","uptime":758406,"bootTime":1569205701,"procs":232,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"16.04","kernelVersion":"4.4.0-151-generic","virtualizationSystem":"","virtualizationRole":"","hostid":"2b6963f8-8da7-4a4c-98f5-24ade4faebd1"}
I1002 05:08:27.358915 9944 start.go:246] virtualization:

  • minikube v1.4.0 on Ubuntu 16.04
    I1002 05:08:27.359512 9944 profile.go:66] Saving config to /root/.minikube/profiles/minikube/config.json ...
    I1002 05:08:27.359569 9944 lock.go:41] attempting to write to file "/root/.minikube/profiles/minikube/config.json.tmp122847953" with filemode -rw-------
    I1002 05:08:27.360071 9944 cluster.go:98] Skipping create...Using existing machine configuration
  • Tip: Use 'minikube start -p <name>' to create a new cluster, or 'minikube delete' to delete this one.
    I1002 05:08:27.360242 9944 none.go:257] checking for running kubelet ...
    I1002 05:08:27.360251 9944 exec_runner.go:40] Run: systemctl is-active --quiet service kubelet
    I1002 05:08:27.363183 9944 cluster.go:110] Machine state: Running
  • Using the running none "minikube" VM ...
    I1002 05:08:27.363257 9944 cluster.go:128] engine options: &{ArbitraryFlags:[] DNS:[] GraphDir: Env:[] Ipv6:false InsecureRegistry:[10.96.0.0/12] Labels:[] LogLevel: StorageDriver: SelinuxEnabled:false TLSVerify:false RegistryMirror:[] InstallURL:https://get.docker.com}
  • Waiting for the host to be provisioned ...
    I1002 05:08:27.363294 9944 cluster.go:149] configureHost: &{BaseDriver:0xc000135400 CommonDriver: URL:tcp://192.168.1.26:2376 runtime:0xc0009e3660 exec:0x2e05c10}
    I1002 05:08:27.363329 9944 cluster.go:168] none is a local driver, skipping auth/time setup
    I1002 05:08:27.363343 9944 cluster.go:151] configureHost completed within 48.926Β΅s
    I1002 05:08:27.363649 9944 profile.go:66] Saving config to /root/.minikube/profiles/minikube/config.json ...
    I1002 05:08:27.363691 9944 lock.go:41] attempting to write to file "/root/.minikube/profiles/minikube/config.json.tmp483942908" with filemode -rw-------
    I1002 05:08:27.363829 9944 exec_runner.go:40] Run: sudo systemctl start docker
    I1002 05:08:27.369756 9944 exec_runner.go:51] Run with output: docker version --format '{{.Server.Version}}'
  • Preparing Kubernetes v1.16.0 on Docker 18.09.7 ...
    I1002 05:08:27.398943 9944 settings.go:124] acquiring lock: {Name:kubeconfigUpdate Clock:{} Delay:10s Timeout:0s Cancel:}
    I1002 05:08:27.399015 9944 settings.go:132] Updating kubeconfig: /root/.kube/config
    I1002 05:08:27.400574 9944 lock.go:41] attempting to write to file "/root/.kube/config" with filemode -rw-------
    I1002 05:08:27.401309 9944 kubeadm.go:610] kubelet v1.16.0 config:
    [Unit]
    Wants=docker.socket

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --authorization-mode=Webhook --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroup-driver=cgroupfs --client-ca-file=/var/lib/minikube/certs/ca.crt --cluster-dns=10.96.0.10 --cluster-domain=cluster.local --container-runtime=docker --fail-swap-on=false --hostname-override=minikube --kubeconfig=/etc/kubernetes/kubelet.conf --pod-manifest-path=/etc/kubernetes/manifests

[Install]
I1002 05:08:27.401339 9944 exec_runner.go:40] Run: pgrep kubelet && sudo systemctl stop kubelet
I1002 05:08:27.418852 9944 cache_binaries.go:63] Not caching binary, using https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/linux/amd64/kubeadm
I1002 05:08:27.418856 9944 cache_binaries.go:63] Not caching binary, using https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/linux/amd64/kubelet
I1002 05:08:27.419027 9944 vm_assets.go:82] NewFileAsset: /root/.minikube/cache/v1.16.0/kubelet -> /var/lib/minikube/binaries/v1.16.0/kubelet
I1002 05:08:27.418880 9944 vm_assets.go:82] NewFileAsset: /root/.minikube/cache/v1.16.0/kubeadm -> /var/lib/minikube/binaries/v1.16.0/kubeadm
I1002 05:08:27.516686 9944 exec_runner.go:40] Run: sudo systemctl daemon-reload && sudo systemctl start kubelet
I1002 05:08:27.639271 9944 certs.go:71] acquiring lock: {Name:setupCerts Clock:{} Delay:15s Timeout:0s Cancel:}
I1002 05:08:27.639374 9944 certs.go:79] Setting up /root/.minikube for IP: 192.168.1.26
I1002 05:08:27.639415 9944 crypto.go:69] Generating cert /root/.minikube/client.crt with IP's: []
I1002 05:08:27.641548 9944 crypto.go:157] Writing cert to /root/.minikube/client.crt ...
I1002 05:08:27.641572 9944 lock.go:41] attempting to write to file "/root/.minikube/client.crt" with filemode -rw-r--r--
I1002 05:08:27.641694 9944 crypto.go:165] Writing key to /root/.minikube/client.key ...
I1002 05:08:27.641711 9944 lock.go:41] attempting to write to file "/root/.minikube/client.key" with filemode -rw-------
I1002 05:08:27.641776 9944 crypto.go:69] Generating cert /root/.minikube/apiserver.crt with IP's: [127.0.0.1 192.168.1.26 10.96.0.1 10.0.0.1]
I1002 05:08:27.643901 9944 crypto.go:157] Writing cert to /root/.minikube/apiserver.crt ...
I1002 05:08:27.643925 9944 lock.go:41] attempting to write to file "/root/.minikube/apiserver.crt" with filemode -rw-r--r--
I1002 05:08:27.644018 9944 crypto.go:165] Writing key to /root/.minikube/apiserver.key ...
I1002 05:08:27.644034 9944 lock.go:41] attempting to write to file "/root/.minikube/apiserver.key" with filemode -rw-------
I1002 05:08:27.644105 9944 crypto.go:69] Generating cert /root/.minikube/proxy-client.crt with IP's: []
I1002 05:08:27.646241 9944 crypto.go:157] Writing cert to /root/.minikube/proxy-client.crt ...
I1002 05:08:27.646264 9944 lock.go:41] attempting to write to file "/root/.minikube/proxy-client.crt" with filemode -rw-r--r--
I1002 05:08:27.646396 9944 crypto.go:165] Writing key to /root/.minikube/proxy-client.key ...
I1002 05:08:27.646415 9944 lock.go:41] attempting to write to file "/root/.minikube/proxy-client.key" with filemode -rw-------
I1002 05:08:27.646516 9944 vm_assets.go:82] NewFileAsset: /root/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
I1002 05:08:27.646536 9944 vm_assets.go:82] NewFileAsset: /root/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
I1002 05:08:27.646548 9944 vm_assets.go:82] NewFileAsset: /root/.minikube/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
I1002 05:08:27.646560 9944 vm_assets.go:82] NewFileAsset: /root/.minikube/apiserver.key -> /var/lib/minikube/certs/apiserver.key
I1002 05:08:27.646574 9944 vm_assets.go:82] NewFileAsset: /root/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
I1002 05:08:27.646586 9944 vm_assets.go:82] NewFileAsset: /root/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
I1002 05:08:27.646600 9944 vm_assets.go:82] NewFileAsset: /root/.minikube/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
I1002 05:08:27.646611 9944 vm_assets.go:82] NewFileAsset: /root/.minikube/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
I1002 05:08:27.646705 9944 vm_assets.go:82] NewFileAsset: /root/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
I1002 05:08:27.647386 9944 exec_runner.go:40] Run: which openssl
I1002 05:08:27.648780 9944 exec_runner.go:40] Run: sudo test -f '/etc/ssl/certs/minikubeCA.pem'
I1002 05:08:27.652840 9944 exec_runner.go:51] Run with output: openssl x509 -hash -noout -in '/usr/share/ca-certificates/minikubeCA.pem'
I1002 05:08:27.655712 9944 exec_runner.go:40] Run: sudo test -f '/etc/ssl/certs/b5213941.0'

  • Relaunching Kubernetes using kubeadm ...
    I1002 05:08:27.660148 9944 kubeadm.go:396] RestartCluster start
    I1002 05:08:27.660165 9944 exec_runner.go:40] Run: sudo test -d /data/minikube
    I1002 05:08:27.663920 9944 kubeadm.go:216] /data/minikube check failed, skipping compat symlinks: running command: sudo test -d /data/minikube: exit status 1
    I1002 05:08:27.663955 9944 exec_runner.go:40] Run: sudo env PATH=/var/lib/minikube/binaries/v1.16.0:$PATH kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml
    I1002 05:08:27.706063 9944 exec_runner.go:40] Run: sudo env PATH=/var/lib/minikube/binaries/v1.16.0:$PATH kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml
    I1002 05:08:28.184577 9944 exec_runner.go:40] Run: sudo env PATH=/var/lib/minikube/binaries/v1.16.0:$PATH kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml
    I1002 05:08:28.228810 9944 exec_runner.go:40] Run: sudo env PATH=/var/lib/minikube/binaries/v1.16.0:$PATH kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml
    I1002 05:08:28.271506 9944 kubeadm.go:454] Waiting for apiserver process ...
    I1002 05:08:28.271535 9944 exec_runner.go:40] Run: sudo pgrep kube-apiserver
    W1002 05:08:28.278955 9944 kubeadm.go:460] pgrep apiserver: running command: sudo pgrep kube-apiserver: exit status 1
    I1002 05:08:28.579139 9944 exec_runner.go:40] Run: sudo pgrep kube-apiserver
    W1002 05:08:28.586717 9944 kubeadm.go:460] pgrep apiserver: running command: sudo pgrep kube-apiserver: exit status 1
    I1002 05:08:28.879121 9944 exec_runner.go:40] Run: sudo pgrep kube-apiserver
    W1002 05:08:28.886818 9944 kubeadm.go:460] pgrep apiserver: running command: sudo pgrep kube-apiserver: exit status 1
    I1002 05:08:29.179137 9944 exec_runner.go:40] Run: sudo pgrep kube-apiserver
    W1002 05:08:29.186831 9944 kubeadm.go:460] pgrep apiserver: running command: sudo pgrep kube-apiserver: exit status 1
    I1002 05:08:29.479195 9944 exec_runner.go:40] Run: sudo pgrep kube-apiserver
    W1002 05:08:29.487412 9944 kubeadm.go:460] pgrep apiserver: running command: sudo pgrep kube-apiserver: exit status 1
    I1002 05:08:29.779175 9944 exec_runner.go:40] Run: sudo pgrep kube-apiserver
    W1002 05:08:29.786815 9944 kubeadm.go:460] pgrep apiserver: running command: sudo pgrep kube-apiserver: exit status 1
    I1002 05:08:30.079126 9944 exec_runner.go:40] Run: sudo pgrep kube-apiserver
    W1002 05:08:30.087815 9944 kubeadm.go:460] pgrep apiserver: running command: sudo pgrep kube-apiserver: exit status 1
    I1002 05:08:30.379292 9944 exec_runner.go:40] Run: sudo pgrep kube-apiserver
    W1002 05:08:30.386983 9944 kubeadm.go:460] pgrep apiserver: running command: sudo pgrep kube-apiserver: exit status 1
    I1002 05:08:30.679144 9944 exec_runner.go:40] Run: sudo pgrep kube-apiserver
    W1002 05:08:30.687482 9944 kubeadm.go:460] pgrep apiserver: running command: sudo pgrep kube-apiserver: exit status 1
    I1002 05:08:30.979149 9944 exec_runner.go:40] Run: sudo pgrep kube-apiserver
    W1002 05:08:30.986853 9944 kubeadm.go:460] pgrep apiserver: running command: sudo pgrep kube-apiserver: exit status 1
    I1002 05:08:31.279147 9944 exec_runner.go:40] Run: sudo pgrep kube-apiserver
    W1002 05:08:31.286953 9944 kubeadm.go:460] pgrep apiserver: running command: sudo pgrep kube-apiserver: exit status 1
    I1002 05:08:31.579168 9944 exec_runner.go:40] Run: sudo pgrep kube-apiserver
    W1002 05:08:31.586752 9944 kubeadm.go:460] pgrep apiserver: running command: sudo pgrep kube-apiserver: exit status 1
    I1002 05:08:31.879134 9944 exec_runner.go:40] Run: sudo pgrep kube-apiserver
    W1002 05:08:31.886677 9944 kubeadm.go:460] pgrep apiserver: running command: sudo pgrep kube-apiserver: exit status 1
    I1002 05:08:32.179121 9944 exec_runner.go:40] Run: sudo pgrep kube-apiserver
    W1002 05:08:32.186688 9944 kubeadm.go:460] pgrep apiserver: running command: sudo pgrep kube-apiserver: exit status 1
    I1002 05:08:32.479131 9944 exec_runner.go:40] Run: sudo pgrep kube-apiserver
    W1002 05:08:32.486722 9944 kubeadm.go:460] pgrep apiserver: running command: sudo pgrep kube-apiserver: exit status 1
    I1002 05:08:32.779132 9944 exec_runner.go:40] Run: sudo pgrep kube-apiserver
    W1002 05:08:32.786917 9944 kubeadm.go:460] pgrep apiserver: running command: sudo pgrep kube-apiserver: exit status 1

@BaerSy

BaerSy commented Oct 2, 2019

@tstromberg I couldn't find /root/.minikube/config/config.json in my installation, although I followed the instructions to install the binary version. Could you please advise how I can get this file?

@undsoul
Author

undsoul commented Oct 3, 2019

@tstromberg
Here are the logs. Thanks in advance.

minikube start --vm-driver=none --apiserver-ips 127.0.0.1 --apiserver-name localhost --alsologtostderr -v=8 ;

I1003 14:56:02.953529 7728 notify.go:125] Checking for updates...
I1003 14:56:03.127584 7728 start.go:236] hostinfo: {"hostname":"qlikadmin-VirtualBox","uptime":4488,"bootTime":1570099275,"procs":227,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"18.04","kernelVersion":"5.0.0-23-generic","virtualizationSystem":"vbox","virtualizationRole":"guest","hostid":"bbb538ed-a9bd-4ae0-9829-65909ef53ce4"}
I1003 14:56:03.128660 7728 start.go:246] virtualization: vbox guest
πŸ˜„ minikube v1.4.0 on Ubuntu 18.04 (vbox/amd64)
I1003 14:56:03.129783 7728 cache_images.go:295] CacheImage: gcr.io/k8s-minikube/storage-provisioner:v1.8.1 -> /root/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v1.8.1
I1003 14:56:03.129825 7728 cache_images.go:301] /root/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v1.8.1 exists
I1003 14:56:03.129840 7728 cache_images.go:297] CacheImage: gcr.io/k8s-minikube/storage-provisioner:v1.8.1 -> /root/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v1.8.1 completed in 62.957Β΅s
I1003 14:56:03.129859 7728 cache_images.go:82] CacheImage gcr.io/k8s-minikube/storage-provisioner:v1.8.1 -> /root/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v1.8.1 succeeded
I1003 14:56:03.129877 7728 cache_images.go:295] CacheImage: k8s.gcr.io/kube-proxy:v1.16.0 -> /root/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.16.0
I1003 14:56:03.129921 7728 cache_images.go:301] /root/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.16.0 exists
I1003 14:56:03.129936 7728 cache_images.go:297] CacheImage: k8s.gcr.io/kube-proxy:v1.16.0 -> /root/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.16.0 completed in 63.027Β΅s
I1003 14:56:03.129950 7728 cache_images.go:82] CacheImage k8s.gcr.io/kube-proxy:v1.16.0 -> /root/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.16.0 succeeded
I1003 14:56:03.129969 7728 cache_images.go:295] CacheImage: k8s.gcr.io/kube-scheduler:v1.16.0 -> /root/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.16.0
I1003 14:56:03.129987 7728 cache_images.go:301] /root/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.16.0 exists
I1003 14:56:03.130003 7728 cache_images.go:297] CacheImage: k8s.gcr.io/kube-scheduler:v1.16.0 -> /root/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.16.0 completed in 37.401Β΅s
I1003 14:56:03.130015 7728 cache_images.go:82] CacheImage k8s.gcr.io/kube-scheduler:v1.16.0 -> /root/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.16.0 succeeded
I1003 14:56:03.130032 7728 cache_images.go:295] CacheImage: k8s.gcr.io/kube-controller-manager:v1.16.0 -> /root/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.16.0
I1003 14:56:03.130050 7728 cache_images.go:301] /root/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.16.0 exists
I1003 14:56:03.130061 7728 cache_images.go:297] CacheImage: k8s.gcr.io/kube-controller-manager:v1.16.0 -> /root/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.16.0 completed in 32.54Β΅s
I1003 14:56:03.130098 7728 cache_images.go:82] CacheImage k8s.gcr.io/kube-controller-manager:v1.16.0 -> /root/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.16.0 succeeded
I1003 14:56:03.130115 7728 cache_images.go:295] CacheImage: k8s.gcr.io/kube-apiserver:v1.16.0 -> /root/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.16.0
I1003 14:56:03.130136 7728 cache_images.go:301] /root/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.16.0 exists
I1003 14:56:03.130148 7728 cache_images.go:297] CacheImage: k8s.gcr.io/kube-apiserver:v1.16.0 -> /root/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.16.0 completed in 36.249Β΅s
I1003 14:56:03.130164 7728 cache_images.go:82] CacheImage k8s.gcr.io/kube-apiserver:v1.16.0 -> /root/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.16.0 succeeded
I1003 14:56:03.130181 7728 cache_images.go:295] CacheImage: k8s.gcr.io/pause:3.1 -> /root/.minikube/cache/images/k8s.gcr.io/pause_3.1
I1003 14:56:03.130198 7728 cache_images.go:301] /root/.minikube/cache/images/k8s.gcr.io/pause_3.1 exists
I1003 14:56:03.130209 7728 cache_images.go:297] CacheImage: k8s.gcr.io/pause:3.1 -> /root/.minikube/cache/images/k8s.gcr.io/pause_3.1 completed in 32.072Β΅s
I1003 14:56:03.130220 7728 cache_images.go:82] CacheImage k8s.gcr.io/pause:3.1 -> /root/.minikube/cache/images/k8s.gcr.io/pause_3.1 succeeded
I1003 14:56:03.130259 7728 cache_images.go:295] CacheImage: k8s.gcr.io/k8s-dns-kube-dns-amd64:1.14.13 -> /root/.minikube/cache/images/k8s.gcr.io/k8s-dns-kube-dns-amd64_1.14.13
I1003 14:56:03.130281 7728 cache_images.go:301] /root/.minikube/cache/images/k8s.gcr.io/k8s-dns-kube-dns-amd64_1.14.13 exists
I1003 14:56:03.131050 7728 cache_images.go:297] CacheImage: k8s.gcr.io/k8s-dns-kube-dns-amd64:1.14.13 -> /root/.minikube/cache/images/k8s.gcr.io/k8s-dns-kube-dns-amd64_1.14.13 completed in 793.731Β΅s
I1003 14:56:03.131410 7728 cache_images.go:82] CacheImage k8s.gcr.io/k8s-dns-kube-dns-amd64:1.14.13 -> /root/.minikube/cache/images/k8s.gcr.io/k8s-dns-kube-dns-amd64_1.14.13 succeeded
I1003 14:56:03.131006 7728 cache_images.go:295] CacheImage: k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64:1.14.13 -> /root/.minikube/cache/images/k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64_1.14.13
I1003 14:56:03.132184 7728 cache_images.go:301] /root/.minikube/cache/images/k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64_1.14.13 exists
I1003 14:56:03.131018 7728 cache_images.go:295] CacheImage: k8s.gcr.io/k8s-dns-sidecar-amd64:1.14.13 -> /root/.minikube/cache/images/k8s.gcr.io/k8s-dns-sidecar-amd64_1.14.13
I1003 14:56:03.131024 7728 cache_images.go:295] CacheImage: k8s.gcr.io/etcd:3.3.15-0 -> /root/.minikube/cache/images/k8s.gcr.io/etcd_3.3.15-0
I1003 14:56:03.131030 7728 cache_images.go:295] CacheImage: k8s.gcr.io/coredns:1.6.2 -> /root/.minikube/cache/images/k8s.gcr.io/coredns_1.6.2
I1003 14:56:03.131034 7728 cache_images.go:295] CacheImage: kubernetesui/dashboard:v2.0.0-beta4 -> /root/.minikube/cache/images/kubernetesui/dashboard_v2.0.0-beta4
I1003 14:56:03.131038 7728 cache_images.go:295] CacheImage: k8s.gcr.io/kube-addon-manager:v9.0.2 -> /root/.minikube/cache/images/k8s.gcr.io/kube-addon-manager_v9.0.2
I1003 14:56:03.132533 7728 cache_images.go:297] CacheImage: k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64:1.14.13 -> /root/.minikube/cache/images/k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64_1.14.13 completed in 1.535794ms
I1003 14:56:03.133095 7728 cache_images.go:301] /root/.minikube/cache/images/k8s.gcr.io/k8s-dns-sidecar-amd64_1.14.13 exists
I1003 14:56:03.133584 7728 cache_images.go:301] /root/.minikube/cache/images/k8s.gcr.io/etcd_3.3.15-0 exists
I1003 14:56:03.133972 7728 cache_images.go:301] /root/.minikube/cache/images/k8s.gcr.io/coredns_1.6.2 exists
I1003 14:56:03.134378 7728 cache_images.go:301] /root/.minikube/cache/images/kubernetesui/dashboard_v2.0.0-beta4 exists
I1003 14:56:03.134914 7728 cache_images.go:301] /root/.minikube/cache/images/k8s.gcr.io/kube-addon-manager_v9.0.2 exists
I1003 14:56:03.135462 7728 cache_images.go:82] CacheImage k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64:1.14.13 -> /root/.minikube/cache/images/k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64_1.14.13 succeeded
I1003 14:56:03.135983 7728 cache_images.go:297] CacheImage: k8s.gcr.io/k8s-dns-sidecar-amd64:1.14.13 -> /root/.minikube/cache/images/k8s.gcr.io/k8s-dns-sidecar-amd64_1.14.13 completed in 4.962909ms
I1003 14:56:03.136349 7728 cache_images.go:297] CacheImage: k8s.gcr.io/etcd:3.3.15-0 -> /root/.minikube/cache/images/k8s.gcr.io/etcd_3.3.15-0 completed in 5.32424ms
I1003 14:56:03.136627 7728 cache_images.go:297] CacheImage: k8s.gcr.io/coredns:1.6.2 -> /root/.minikube/cache/images/k8s.gcr.io/coredns_1.6.2 completed in 5.596749ms
I1003 14:56:03.137224 7728 cache_images.go:297] CacheImage: kubernetesui/dashboard:v2.0.0-beta4 -> /root/.minikube/cache/images/kubernetesui/dashboard_v2.0.0-beta4 completed in 6.187311ms
I1003 14:56:03.137711 7728 cache_images.go:297] CacheImage: k8s.gcr.io/kube-addon-manager:v9.0.2 -> /root/.minikube/cache/images/k8s.gcr.io/kube-addon-manager_v9.0.2 completed in 6.573533ms
I1003 14:56:03.138332 7728 cache_images.go:82] CacheImage k8s.gcr.io/kube-addon-manager:v9.0.2 -> /root/.minikube/cache/images/k8s.gcr.io/kube-addon-manager_v9.0.2 succeeded
I1003 14:56:03.137914 7728 cache_images.go:82] CacheImage k8s.gcr.io/k8s-dns-sidecar-amd64:1.14.13 -> /root/.minikube/cache/images/k8s.gcr.io/k8s-dns-sidecar-amd64_1.14.13 succeeded
I1003 14:56:03.138287 7728 cache_images.go:82] CacheImage k8s.gcr.io/etcd:3.3.15-0 -> /root/.minikube/cache/images/k8s.gcr.io/etcd_3.3.15-0 succeeded
I1003 14:56:03.138314 7728 cache_images.go:82] CacheImage k8s.gcr.io/coredns:1.6.2 -> /root/.minikube/cache/images/k8s.gcr.io/coredns_1.6.2 succeeded
I1003 14:56:03.138324 7728 cache_images.go:82] CacheImage kubernetesui/dashboard:v2.0.0-beta4 -> /root/.minikube/cache/images/kubernetesui/dashboard_v2.0.0-beta4 succeeded
I1003 14:56:03.138365 7728 cache_images.go:89] Successfully cached all images.
I1003 14:56:03.138759 7728 profile.go:66] Saving config to /root/.minikube/profiles/minikube/config.json ...
I1003 14:56:03.138859 7728 lock.go:41] attempting to write to file "/root/.minikube/profiles/minikube/config.json" with filemode -rw-------
I1003 14:56:03.139415 7728 cluster.go:93] Machine does not exist... provisioning new machine
I1003 14:56:03.139442 7728 cluster.go:94] Provisioning machine with config: {KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.4.0.iso Memory:2000 CPUs:2 DiskSize:20000 VMDriver:none ContainerRuntime:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false Downloader:{} DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true}
🀹 Running on localhost (CPUs=2, Memory=11269MB, Disk=51192MB) ...
ℹ️ OS release is Ubuntu 18.04.3 LTS
I1003 14:56:03.142617 7728 profile.go:66] Saving config to /root/.minikube/profiles/minikube/config.json ...
I1003 14:56:03.143142 7728 lock.go:41] attempting to write to file "/root/.minikube/profiles/minikube/config.json.tmp021263953" with filemode -rw-------
I1003 14:56:03.143999 7728 exec_runner.go:40] Run: sudo systemctl start docker
I1003 14:56:03.155126 7728 exec_runner.go:51] Run with output: docker version --format '{{.Server.Version}}'
🐳 Preparing Kubernetes v1.16.0 on Docker 18.09.9 ...
I1003 14:56:03.197539 7728 settings.go:124] acquiring lock: {Name:kubeconfigUpdate Clock:{} Delay:10s Timeout:0s Cancel:}
I1003 14:56:03.198154 7728 settings.go:132] Updating kubeconfig: /root/.kube/config
I1003 14:56:03.199585 7728 lock.go:41] attempting to write to file "/root/.kube/config" with filemode -rw-------
β–ͺ kubelet.resolv-conf=/run/systemd/resolve/resolv.conf
I1003 14:56:03.201154 7728 cache_images.go:95] LoadImages start: [k8s.gcr.io/kube-proxy:v1.16.0 k8s.gcr.io/kube-scheduler:v1.16.0 k8s.gcr.io/kube-controller-manager:v1.16.0 k8s.gcr.io/kube-apiserver:v1.16.0 k8s.gcr.io/pause:3.1 k8s.gcr.io/k8s-dns-kube-dns-amd64:1.14.13 k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64:1.14.13 k8s.gcr.io/k8s-dns-sidecar-amd64:1.14.13 k8s.gcr.io/etcd:3.3.15-0 k8s.gcr.io/coredns:1.6.2 kubernetesui/dashboard:v2.0.0-beta4 k8s.gcr.io/kube-addon-manager:v9.0.2 gcr.io/k8s-minikube/storage-provisioner:v1.8.1]
I1003 14:56:03.201713 7728 cache_images.go:210] Loading image from cache: /root/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.16.0
I1003 14:56:03.201742 7728 vm_assets.go:82] NewFileAsset: /root/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.16.0 -> /var/lib/minikube/images/kube-proxy_v1.16.0
I1003 14:56:03.205002 7728 cache_images.go:210] Loading image from cache: /root/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v1.8.1
I1003 14:56:03.205548 7728 vm_assets.go:82] NewFileAsset: /root/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v1.8.1 -> /var/lib/minikube/images/storage-provisioner_v1.8.1
I1003 14:56:03.205260 7728 cache_images.go:210] Loading image from cache: /root/.minikube/cache/images/k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64_1.14.13
I1003 14:56:03.206118 7728 vm_assets.go:82] NewFileAsset: /root/.minikube/cache/images/k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64_1.14.13 -> /var/lib/minikube/images/k8s-dns-dnsmasq-nanny-amd64_1.14.13
I1003 14:56:03.205270 7728 cache_images.go:210] Loading image from cache: /root/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.16.0
I1003 14:56:03.206271 7728 vm_assets.go:82] NewFileAsset: /root/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.16.0 -> /var/lib/minikube/images/kube-scheduler_v1.16.0
I1003 14:56:03.205277 7728 cache_images.go:210] Loading image from cache: /root/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.16.0
I1003 14:56:03.217921 7728 vm_assets.go:82] NewFileAsset: /root/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.16.0 -> /var/lib/minikube/images/kube-controller-manager_v1.16.0
I1003 14:56:03.205370 7728 cache_images.go:210] Loading image from cache: /root/.minikube/cache/images/k8s.gcr.io/k8s-dns-sidecar-amd64_1.14.13
I1003 14:56:03.224864 7728 vm_assets.go:82] NewFileAsset: /root/.minikube/cache/images/k8s.gcr.io/k8s-dns-sidecar-amd64_1.14.13 -> /var/lib/minikube/images/k8s-dns-sidecar-amd64_1.14.13
I1003 14:56:03.205371 7728 cache_images.go:210] Loading image from cache: /root/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.16.0
I1003 14:56:03.237174 7728 vm_assets.go:82] NewFileAsset: /root/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.16.0 -> /var/lib/minikube/images/kube-apiserver_v1.16.0
I1003 14:56:03.205376 7728 cache_images.go:210] Loading image from cache: /root/.minikube/cache/images/k8s.gcr.io/etcd_3.3.15-0
I1003 14:56:03.205382 7728 cache_images.go:210] Loading image from cache: /root/.minikube/cache/images/k8s.gcr.io/pause_3.1
I1003 14:56:03.205383 7728 cache_images.go:210] Loading image from cache: /root/.minikube/cache/images/k8s.gcr.io/coredns_1.6.2
I1003 14:56:03.205389 7728 cache_images.go:210] Loading image from cache: /root/.minikube/cache/images/kubernetesui/dashboard_v2.0.0-beta4
I1003 14:56:03.205388 7728 cache_images.go:210] Loading image from cache: /root/.minikube/cache/images/k8s.gcr.io/k8s-dns-kube-dns-amd64_1.14.13
I1003 14:56:03.205394 7728 cache_images.go:210] Loading image from cache: /root/.minikube/cache/images/k8s.gcr.io/kube-addon-manager_v9.0.2
I1003 14:56:03.262827 7728 vm_assets.go:82] NewFileAsset: /root/.minikube/cache/images/k8s.gcr.io/etcd_3.3.15-0 -> /var/lib/minikube/images/etcd_3.3.15-0
I1003 14:56:03.262905 7728 vm_assets.go:82] NewFileAsset: /root/.minikube/cache/images/k8s.gcr.io/pause_3.1 -> /var/lib/minikube/images/pause_3.1
I1003 14:56:03.272220 7728 docker.go:97] Loading image: /var/lib/minikube/images/pause_3.1
I1003 14:56:03.262922 7728 vm_assets.go:82] NewFileAsset: /root/.minikube/cache/images/k8s.gcr.io/coredns_1.6.2 -> /var/lib/minikube/images/coredns_1.6.2
I1003 14:56:03.262941 7728 vm_assets.go:82] NewFileAsset: /root/.minikube/cache/images/kubernetesui/dashboard_v2.0.0-beta4 -> /var/lib/minikube/images/dashboard_v2.0.0-beta4
I1003 14:56:03.262955 7728 vm_assets.go:82] NewFileAsset: /root/.minikube/cache/images/k8s.gcr.io/k8s-dns-kube-dns-amd64_1.14.13 -> /var/lib/minikube/images/k8s-dns-kube-dns-amd64_1.14.13
I1003 14:56:03.262969 7728 vm_assets.go:82] NewFileAsset: /root/.minikube/cache/images/k8s.gcr.io/kube-addon-manager_v9.0.2 -> /var/lib/minikube/images/kube-addon-manager_v9.0.2
I1003 14:56:03.290239 7728 exec_runner.go:40] Run: docker load -i /var/lib/minikube/images/pause_3.1
I1003 14:56:03.456168 7728 cache_images.go:236] Successfully loaded image /root/.minikube/cache/images/k8s.gcr.io/pause_3.1 from cache
I1003 14:56:03.456830 7728 docker.go:97] Loading image: /var/lib/minikube/images/k8s-dns-dnsmasq-nanny-amd64_1.14.13
I1003 14:56:03.456892 7728 exec_runner.go:40] Run: docker load -i /var/lib/minikube/images/k8s-dns-dnsmasq-nanny-amd64_1.14.13
I1003 14:56:03.567529 7728 cache_images.go:236] Successfully loaded image /root/.minikube/cache/images/k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64_1.14.13 from cache
I1003 14:56:03.567782 7728 docker.go:97] Loading image: /var/lib/minikube/images/k8s-dns-sidecar-amd64_1.14.13
I1003 14:56:03.567935 7728 exec_runner.go:40] Run: docker load -i /var/lib/minikube/images/k8s-dns-sidecar-amd64_1.14.13
I1003 14:56:03.685594 7728 cache_images.go:236] Successfully loaded image /root/.minikube/cache/images/k8s.gcr.io/k8s-dns-sidecar-amd64_1.14.13 from cache
I1003 14:56:03.685693 7728 docker.go:97] Loading image: /var/lib/minikube/images/storage-provisioner_v1.8.1
I1003 14:56:03.685702 7728 exec_runner.go:40] Run: docker load -i /var/lib/minikube/images/storage-provisioner_v1.8.1
I1003 14:56:03.822549 7728 cache_images.go:236] Successfully loaded image /root/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v1.8.1 from cache
I1003 14:56:03.822582 7728 docker.go:97] Loading image: /var/lib/minikube/images/kube-scheduler_v1.16.0
I1003 14:56:03.822587 7728 exec_runner.go:40] Run: docker load -i /var/lib/minikube/images/kube-scheduler_v1.16.0
I1003 14:56:03.991058 7728 cache_images.go:236] Successfully loaded image /root/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.16.0 from cache
I1003 14:56:03.991090 7728 docker.go:97] Loading image: /var/lib/minikube/images/kube-proxy_v1.16.0
I1003 14:56:03.991096 7728 exec_runner.go:40] Run: docker load -i /var/lib/minikube/images/kube-proxy_v1.16.0
I1003 14:56:04.178754 7728 cache_images.go:236] Successfully loaded image /root/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.16.0 from cache
I1003 14:56:04.178792 7728 docker.go:97] Loading image: /var/lib/minikube/images/k8s-dns-kube-dns-amd64_1.14.13
I1003 14:56:04.178799 7728 exec_runner.go:40] Run: docker load -i /var/lib/minikube/images/k8s-dns-kube-dns-amd64_1.14.13
I1003 14:56:04.330773 7728 cache_images.go:236] Successfully loaded image /root/.minikube/cache/images/k8s.gcr.io/k8s-dns-kube-dns-amd64_1.14.13 from cache
I1003 14:56:04.330809 7728 docker.go:97] Loading image: /var/lib/minikube/images/kube-apiserver_v1.16.0
I1003 14:56:04.330815 7728 exec_runner.go:40] Run: docker load -i /var/lib/minikube/images/kube-apiserver_v1.16.0
I1003 14:56:04.570514 7728 cache_images.go:236] Successfully loaded image /root/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.16.0 from cache
I1003 14:56:04.570546 7728 docker.go:97] Loading image: /var/lib/minikube/images/coredns_1.6.2
I1003 14:56:04.570552 7728 exec_runner.go:40] Run: docker load -i /var/lib/minikube/images/coredns_1.6.2
I1003 14:56:04.697135 7728 cache_images.go:236] Successfully loaded image /root/.minikube/cache/images/k8s.gcr.io/coredns_1.6.2 from cache
I1003 14:56:04.697206 7728 docker.go:97] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.16.0
I1003 14:56:04.697214 7728 exec_runner.go:40] Run: docker load -i /var/lib/minikube/images/kube-controller-manager_v1.16.0
I1003 14:56:04.897402 7728 cache_images.go:236] Successfully loaded image /root/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.16.0 from cache
I1003 14:56:04.897435 7728 docker.go:97] Loading image: /var/lib/minikube/images/dashboard_v2.0.0-beta4
I1003 14:56:04.897441 7728 exec_runner.go:40] Run: docker load -i /var/lib/minikube/images/dashboard_v2.0.0-beta4
I1003 14:56:05.074616 7728 cache_images.go:236] Successfully loaded image /root/.minikube/cache/images/kubernetesui/dashboard_v2.0.0-beta4 from cache
I1003 14:56:05.074645 7728 docker.go:97] Loading image: /var/lib/minikube/images/kube-addon-manager_v9.0.2
I1003 14:56:05.074651 7728 exec_runner.go:40] Run: docker load -i /var/lib/minikube/images/kube-addon-manager_v9.0.2
I1003 14:56:05.240578 7728 cache_images.go:236] Successfully loaded image /root/.minikube/cache/images/k8s.gcr.io/kube-addon-manager_v9.0.2 from cache
I1003 14:56:05.240609 7728 docker.go:97] Loading image: /var/lib/minikube/images/etcd_3.3.15-0
I1003 14:56:05.240615 7728 exec_runner.go:40] Run: docker load -i /var/lib/minikube/images/etcd_3.3.15-0
I1003 14:56:05.522543 7728 cache_images.go:236] Successfully loaded image /root/.minikube/cache/images/k8s.gcr.io/etcd_3.3.15-0 from cache
I1003 14:56:05.522570 7728 cache_images.go:119] Successfully loaded all cached images.
I1003 14:56:05.522575 7728 cache_images.go:120] LoadImages end
I1003 14:56:05.522700 7728 kubeadm.go:610] kubelet v1.16.0 config:
[Unit]
Wants=docker.socket

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --authorization-mode=Webhook --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroup-driver=cgroupfs --client-ca-file=/var/lib/minikube/certs/ca.crt --cluster-dns=10.96.0.10 --cluster-domain=cluster.local --container-runtime=docker --fail-swap-on=false --hostname-override=minikube --kubeconfig=/etc/kubernetes/kubelet.conf --pod-manifest-path=/etc/kubernetes/manifests --resolv-conf=/run/systemd/resolve/resolv.conf

[Install]
I1003 14:56:05.523032 7728 exec_runner.go:40] Run: pgrep kubelet && sudo systemctl stop kubelet
W1003 14:56:05.529305 7728 kubeadm.go:615] unable to stop kubelet: running command: pgrep kubelet && sudo systemctl stop kubelet: exit status 1
I1003 14:56:05.529973 7728 cache_binaries.go:63] Not caching binary, using https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/linux/amd64/kubeadm
I1003 14:56:05.529985 7728 vm_assets.go:82] NewFileAsset: /root/.minikube/cache/v1.16.0/kubeadm -> /var/lib/minikube/binaries/v1.16.0/kubeadm
I1003 14:56:05.530065 7728 cache_binaries.go:63] Not caching binary, using https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/linux/amd64/kubelet
I1003 14:56:05.530081 7728 vm_assets.go:82] NewFileAsset: /root/.minikube/cache/v1.16.0/kubelet -> /var/lib/minikube/binaries/v1.16.0/kubelet
I1003 14:56:05.620724 7728 exec_runner.go:40] Run: sudo systemctl daemon-reload && sudo systemctl start kubelet
I1003 14:56:05.798880 7728 certs.go:71] acquiring lock: {Name:setupCerts Clock:{} Delay:15s Timeout:0s Cancel:}
I1003 14:56:05.799025 7728 certs.go:79] Setting up /root/.minikube for IP: 10.0.2.15
I1003 14:56:05.799079 7728 crypto.go:69] Generating cert /root/.minikube/client.crt with IP's: []
I1003 14:56:05.801734 7728 crypto.go:157] Writing cert to /root/.minikube/client.crt ...
I1003 14:56:05.801934 7728 lock.go:41] attempting to write to file "/root/.minikube/client.crt" with filemode -rw-r--r--
I1003 14:56:05.802173 7728 crypto.go:165] Writing key to /root/.minikube/client.key ...
I1003 14:56:05.802369 7728 lock.go:41] attempting to write to file "/root/.minikube/client.key" with filemode -rw-------
I1003 14:56:05.809725 7728 crypto.go:69] Generating cert /root/.minikube/apiserver.crt with IP's: [127.0.0.1 10.0.2.15 10.96.0.1 10.0.0.1]
I1003 14:56:05.812123 7728 crypto.go:157] Writing cert to /root/.minikube/apiserver.crt ...
I1003 14:56:05.812171 7728 lock.go:41] attempting to write to file "/root/.minikube/apiserver.crt" with filemode -rw-r--r--
I1003 14:56:05.812523 7728 crypto.go:165] Writing key to /root/.minikube/apiserver.key ...
I1003 14:56:05.812633 7728 lock.go:41] attempting to write to file "/root/.minikube/apiserver.key" with filemode -rw-------
I1003 14:56:05.812837 7728 crypto.go:69] Generating cert /root/.minikube/proxy-client.crt with IP's: []
I1003 14:56:05.816140 7728 crypto.go:157] Writing cert to /root/.minikube/proxy-client.crt ...
I1003 14:56:05.817884 7728 lock.go:41] attempting to write to file "/root/.minikube/proxy-client.crt" with filemode -rw-r--r--
I1003 14:56:05.818363 7728 crypto.go:165] Writing key to /root/.minikube/proxy-client.key ...
I1003 14:56:05.818393 7728 lock.go:41] attempting to write to file "/root/.minikube/proxy-client.key" with filemode -rw-------
I1003 14:56:05.818615 7728 vm_assets.go:82] NewFileAsset: /root/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
I1003 14:56:05.818646 7728 vm_assets.go:82] NewFileAsset: /root/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
I1003 14:56:05.818666 7728 vm_assets.go:82] NewFileAsset: /root/.minikube/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
I1003 14:56:05.818679 7728 vm_assets.go:82] NewFileAsset: /root/.minikube/apiserver.key -> /var/lib/minikube/certs/apiserver.key
I1003 14:56:05.818690 7728 vm_assets.go:82] NewFileAsset: /root/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
I1003 14:56:05.818852 7728 vm_assets.go:82] NewFileAsset: /root/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
I1003 14:56:05.818967 7728 vm_assets.go:82] NewFileAsset: /root/.minikube/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
I1003 14:56:05.818981 7728 vm_assets.go:82] NewFileAsset: /root/.minikube/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
I1003 14:56:05.819308 7728 vm_assets.go:82] NewFileAsset: /root/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
I1003 14:56:05.820466 7728 exec_runner.go:40] Run: which openssl
I1003 14:56:05.822380 7728 exec_runner.go:40] Run: sudo test -f '/etc/ssl/certs/minikubeCA.pem'
I1003 14:56:05.849333 7728 exec_runner.go:51] Run with output: openssl x509 -hash -noout -in '/usr/share/ca-certificates/minikubeCA.pem'
I1003 14:56:05.854030 7728 exec_runner.go:40] Run: sudo test -f '/etc/ssl/certs/b5213941.0'
🚜 Pulling images ...
I1003 14:56:05.862093 7728 exec_runner.go:40] Run: sudo env PATH=/var/lib/minikube/binaries/v1.16.0:$PATH kubeadm config images pull --config /var/tmp/minikube/kubeadm.yaml
πŸš€ Launching Kubernetes ...
I1003 14:56:18.544294 7728 kubeadm.go:232] StartCluster: {KubernetesVersion:v1.16.0 NodeIP:10.0.2.15 NodePort:8443 NodeName:minikube APIServerName:localhost APIServerNames:[] APIServerIPs:[127.0.0.1] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:true EnableDefaultCNI:false}
I1003 14:56:18.544737 7728 exec_runner.go:51] Run with output: sudo env PATH=/var/lib/minikube/binaries/v1.16.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap
I1003 14:58:28.055021 7728 kubeadm.go:234] StartCluster complete in 2m9.510731784s
I1003 14:58:28.055255 7728 exec_runner.go:51] Run with output: docker ps -a --filter="name=k8s_kube-apiserver" --format="{{.ID}}"
I1003 14:58:28.106472 7728 logs.go:160] 0 containers: []
W1003 14:58:28.106555 7728 logs.go:162] No container was found matching "kube-apiserver"
I1003 14:58:28.106575 7728 exec_runner.go:51] Run with output: docker ps -a --filter="name=k8s_coredns" --format="{{.ID}}"
I1003 14:58:28.168512 7728 logs.go:160] 0 containers: []
W1003 14:58:28.168543 7728 logs.go:162] No container was found matching "coredns"
I1003 14:58:28.168557 7728 exec_runner.go:51] Run with output: docker ps -a --filter="name=k8s_kube-scheduler" --format="{{.ID}}"
I1003 14:58:28.219566 7728 logs.go:160] 0 containers: []
W1003 14:58:28.219609 7728 logs.go:162] No container was found matching "kube-scheduler"
I1003 14:58:28.219630 7728 exec_runner.go:51] Run with output: docker ps -a --filter="name=k8s_kube-proxy" --format="{{.ID}}"
I1003 14:58:28.265458 7728 logs.go:160] 0 containers: []
W1003 14:58:28.265488 7728 logs.go:162] No container was found matching "kube-proxy"
I1003 14:58:28.265503 7728 exec_runner.go:51] Run with output: docker ps -a --filter="name=k8s_kube-addon-manager" --format="{{.ID}}"
I1003 14:58:28.337694 7728 logs.go:160] 0 containers: []
W1003 14:58:28.337729 7728 logs.go:162] No container was found matching "kube-addon-manager"
I1003 14:58:28.337745 7728 exec_runner.go:51] Run with output: docker ps -a --filter="name=k8s_kubernetes-dashboard" --format="{{.ID}}"
I1003 14:58:28.546925 7728 logs.go:160] 0 containers: []
W1003 14:58:28.546981 7728 logs.go:162] No container was found matching "kubernetes-dashboard"
I1003 14:58:28.547051 7728 exec_runner.go:51] Run with output: docker ps -a --filter="name=k8s_storage-provisioner" --format="{{.ID}}"
I1003 14:58:28.614199 7728 logs.go:160] 0 containers: []
W1003 14:58:28.614235 7728 logs.go:162] No container was found matching "storage-provisioner"
I1003 14:58:28.614250 7728 exec_runner.go:51] Run with output: docker ps -a --filter="name=k8s_kube-controller-manager" --format="{{.ID}}"
I1003 14:58:28.688788 7728 logs.go:160] 0 containers: []
W1003 14:58:28.689002 7728 logs.go:162] No container was found matching "kube-controller-manager"
I1003 14:58:28.689014 7728 logs.go:78] Gathering logs for kubelet ...
I1003 14:58:28.689064 7728 exec_runner.go:51] Run with output: journalctl -u kubelet -n 200
I1003 14:58:28.716096 7728 logs.go:78] Gathering logs for dmesg ...
I1003 14:58:28.716142 7728 exec_runner.go:51] Run with output: sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 200
I1003 14:58:28.738390 7728 logs.go:78] Gathering logs for Docker ...
I1003 14:58:28.738412 7728 exec_runner.go:51] Run with output: sudo journalctl -u docker -n 200
I1003 14:58:28.760024 7728 logs.go:78] Gathering logs for container status ...
I1003 14:58:28.760058 7728 exec_runner.go:51] Run with output: sudo crictl ps -a || sudo docker ps -a
W1003 14:58:38.841561 7728 exit.go:101] Error starting cluster: cmd failed: sudo env PATH=/var/lib/minikube/binaries/v1.16.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap

: running command: sudo env PATH=/var/lib/minikube/binaries/v1.16.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap
output: [init] Using Kubernetes version: v1.16.0
[preflight] Running pre-flight checks
[WARNING Hostname]: hostname "minikube" could not be reached
[WARNING Hostname]: hostname "minikube": lookup minikube on 127.0.0.53:53: server misbehaving
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [minikube localhost] and IPs [10.0.2.15 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [minikube localhost] and IPs [10.0.2.15 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.

Unfortunately, an error has occurred:
timed out waiting for the condition

This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'

Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
Here is one example how you may list all Kubernetes containers running in docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
: running command: sudo env PATH=/var/lib/minikube/binaries/v1.16.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap
.: exit status 1

πŸ’£ Error starting cluster: cmd failed: sudo env PATH=/var/lib/minikube/binaries/v1.16.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap
[same kubeadm init output as above]
.: exit status 1

😿 Sorry that minikube crashed. If this was unexpected, we would love to hear from you:
πŸ‘‰ https://github.com/kubernetes/minikube/issues/new/choose

minikube logs output:

==> Docker <==
-- Logs begin at Thu 2019-10-03 13:34:03 +03, end at Thu 2019-10-03 15:03:04 +03. --
Eki 03 13:59:56 qlikadmin-VirtualBox dockerd[6618]: time="2019-10-03T13:59:56.954972699+03:00" level=info msg="stopping event stream following graceful shutdown" error="" module=libcontainerd namespace=moby
Eki 03 13:59:56 qlikadmin-VirtualBox systemd[1]: Stopped Docker Application Container Engine.
Eki 03 13:59:56 qlikadmin-VirtualBox systemd[1]: Starting Docker Application Container Engine...
Eki 03 13:59:57 qlikadmin-VirtualBox dockerd[7431]: time="2019-10-03T13:59:57.081591264+03:00" level=info msg="systemd-resolved is running, so using resolvconf: /run/systemd/resolve/resolv.conf"
Eki 03 13:59:57 qlikadmin-VirtualBox dockerd[7431]: time="2019-10-03T13:59:57.082268603+03:00" level=info msg="parsed scheme: "unix"" module=grpc
Eki 03 13:59:57 qlikadmin-VirtualBox dockerd[7431]: time="2019-10-03T13:59:57.082282482+03:00" level=info msg="scheme "unix" not registered, fallback to default scheme" module=grpc
Eki 03 13:59:57 qlikadmin-VirtualBox dockerd[7431]: time="2019-10-03T13:59:57.082306876+03:00" level=info msg="parsed scheme: "unix"" module=grpc
Eki 03 13:59:57 qlikadmin-VirtualBox dockerd[7431]: time="2019-10-03T13:59:57.082311897+03:00" level=info msg="scheme "unix" not registered, fallback to default scheme" module=grpc
Eki 03 13:59:57 qlikadmin-VirtualBox dockerd[7431]: time="2019-10-03T13:59:57.083556586+03:00" level=info msg="ccResolverWrapper: sending new addresses to cc: [{unix:///run/containerd/containerd.sock 0 }]" module=grpc
Eki 03 13:59:57 qlikadmin-VirtualBox dockerd[7431]: time="2019-10-03T13:59:57.083603183+03:00" level=info msg="ClientConn switching balancer to "pick_first"" module=grpc
Eki 03 13:59:57 qlikadmin-VirtualBox dockerd[7431]: time="2019-10-03T13:59:57.083709228+03:00" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0xc00013c340, CONNECTING" module=grpc
Eki 03 13:59:57 qlikadmin-VirtualBox dockerd[7431]: time="2019-10-03T13:59:57.083931379+03:00" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0xc00013c340, READY" module=grpc
Eki 03 13:59:57 qlikadmin-VirtualBox dockerd[7431]: time="2019-10-03T13:59:57.083990161+03:00" level=info msg="ccResolverWrapper: sending new addresses to cc: [{unix:///run/containerd/containerd.sock 0 }]" module=grpc
Eki 03 13:59:57 qlikadmin-VirtualBox dockerd[7431]: time="2019-10-03T13:59:57.084002984+03:00" level=info msg="ClientConn switching balancer to "pick_first"" module=grpc
Eki 03 13:59:57 qlikadmin-VirtualBox dockerd[7431]: time="2019-10-03T13:59:57.084030017+03:00" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0xc00013c740, CONNECTING" module=grpc
Eki 03 13:59:57 qlikadmin-VirtualBox dockerd[7431]: time="2019-10-03T13:59:57.084108850+03:00" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0xc00013c740, READY" module=grpc
Eki 03 13:59:57 qlikadmin-VirtualBox dockerd[7431]: time="2019-10-03T13:59:57.094770168+03:00" level=info msg="Graph migration to content-addressability took 0.00 seconds"
Eki 03 13:59:57 qlikadmin-VirtualBox dockerd[7431]: time="2019-10-03T13:59:57.095011252+03:00" level=warning msg="Your kernel does not support swap memory limit"
Eki 03 13:59:57 qlikadmin-VirtualBox dockerd[7431]: time="2019-10-03T13:59:57.095042474+03:00" level=warning msg="Your kernel does not support cgroup rt period"
Eki 03 13:59:57 qlikadmin-VirtualBox dockerd[7431]: time="2019-10-03T13:59:57.095049532+03:00" level=warning msg="Your kernel does not support cgroup rt runtime"
Eki 03 13:59:57 qlikadmin-VirtualBox dockerd[7431]: time="2019-10-03T13:59:57.095057386+03:00" level=warning msg="Your kernel does not support cgroup blkio weight"
Eki 03 13:59:57 qlikadmin-VirtualBox dockerd[7431]: time="2019-10-03T13:59:57.095067230+03:00" level=warning msg="Your kernel does not support cgroup blkio weight_device"
Eki 03 13:59:57 qlikadmin-VirtualBox dockerd[7431]: time="2019-10-03T13:59:57.095432656+03:00" level=info msg="Loading containers: start."
Eki 03 13:59:57 qlikadmin-VirtualBox dockerd[7431]: time="2019-10-03T13:59:57.236078017+03:00" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Eki 03 13:59:57 qlikadmin-VirtualBox dockerd[7431]: time="2019-10-03T13:59:57.279328797+03:00" level=info msg="Loading containers: done."
Eki 03 13:59:57 qlikadmin-VirtualBox dockerd[7431]: time="2019-10-03T13:59:57.315319444+03:00" level=info msg="Docker daemon" commit=039a7df graphdriver(s)=overlay2 version=18.09.9
Eki 03 13:59:57 qlikadmin-VirtualBox dockerd[7431]: time="2019-10-03T13:59:57.315603055+03:00" level=info msg="Daemon has completed initialization"
Eki 03 13:59:57 qlikadmin-VirtualBox dockerd[7431]: time="2019-10-03T13:59:57.326397809+03:00" level=info msg="API listen on /var/run/docker.sock"
Eki 03 13:59:57 qlikadmin-VirtualBox systemd[1]: Started Docker Application Container Engine.
Eki 03 14:03:38 qlikadmin-VirtualBox systemd[1]: Stopping Docker Application Container Engine...
Eki 03 14:03:38 qlikadmin-VirtualBox dockerd[7431]: time="2019-10-03T14:03:38.412362648+03:00" level=info msg="Processing signal 'terminated'"
Eki 03 14:03:38 qlikadmin-VirtualBox dockerd[7431]: time="2019-10-03T14:03:38.413134447+03:00" level=info msg="stopping event stream following graceful shutdown" error="" module=libcontainerd namespace=moby
Eki 03 14:03:38 qlikadmin-VirtualBox systemd[1]: Stopped Docker Application Container Engine.
Eki 03 14:03:38 qlikadmin-VirtualBox systemd[1]: Starting Docker Application Container Engine...
Eki 03 14:03:38 qlikadmin-VirtualBox dockerd[9375]: time="2019-10-03T14:03:38.573239748+03:00" level=info msg="systemd-resolved is running, so using resolvconf: /run/systemd/resolve/resolv.conf"
Eki 03 14:03:38 qlikadmin-VirtualBox dockerd[9375]: time="2019-10-03T14:03:38.575278662+03:00" level=info msg="parsed scheme: "unix"" module=grpc
Eki 03 14:03:38 qlikadmin-VirtualBox dockerd[9375]: time="2019-10-03T14:03:38.575349276+03:00" level=info msg="scheme "unix" not registered, fallback to default scheme" module=grpc
Eki 03 14:03:38 qlikadmin-VirtualBox dockerd[9375]: time="2019-10-03T14:03:38.575401862+03:00" level=info msg="parsed scheme: "unix"" module=grpc
Eki 03 14:03:38 qlikadmin-VirtualBox dockerd[9375]: time="2019-10-03T14:03:38.575412920+03:00" level=info msg="scheme "unix" not registered, fallback to default scheme" module=grpc
Eki 03 14:03:38 qlikadmin-VirtualBox dockerd[9375]: time="2019-10-03T14:03:38.575744617+03:00" level=info msg="ccResolverWrapper: sending new addresses to cc: [{unix:///run/containerd/containerd.sock 0 }]" module=grpc
Eki 03 14:03:38 qlikadmin-VirtualBox dockerd[9375]: time="2019-10-03T14:03:38.575864244+03:00" level=info msg="ClientConn switching balancer to "pick_first"" module=grpc
Eki 03 14:03:38 qlikadmin-VirtualBox dockerd[9375]: time="2019-10-03T14:03:38.575985624+03:00" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0xc0000174d0, CONNECTING" module=grpc
Eki 03 14:03:38 qlikadmin-VirtualBox dockerd[9375]: time="2019-10-03T14:03:38.576465487+03:00" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0xc0000174d0, READY" module=grpc
Eki 03 14:03:38 qlikadmin-VirtualBox dockerd[9375]: time="2019-10-03T14:03:38.577705094+03:00" level=info msg="ccResolverWrapper: sending new addresses to cc: [{unix:///run/containerd/containerd.sock 0 }]" module=grpc
Eki 03 14:03:38 qlikadmin-VirtualBox dockerd[9375]: time="2019-10-03T14:03:38.577918106+03:00" level=info msg="ClientConn switching balancer to "pick_first"" module=grpc
Eki 03 14:03:38 qlikadmin-VirtualBox dockerd[9375]: time="2019-10-03T14:03:38.578192141+03:00" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0xc0000177c0, CONNECTING" module=grpc
Eki 03 14:03:38 qlikadmin-VirtualBox dockerd[9375]: time="2019-10-03T14:03:38.578853189+03:00" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0xc0000177c0, READY" module=grpc
Eki 03 14:03:38 qlikadmin-VirtualBox dockerd[9375]: time="2019-10-03T14:03:38.590709730+03:00" level=info msg="Graph migration to content-addressability took 0.00 seconds"
Eki 03 14:03:38 qlikadmin-VirtualBox dockerd[9375]: time="2019-10-03T14:03:38.591245005+03:00" level=warning msg="Your kernel does not support swap memory limit"
Eki 03 14:03:38 qlikadmin-VirtualBox dockerd[9375]: time="2019-10-03T14:03:38.591289570+03:00" level=warning msg="Your kernel does not support cgroup rt period"
Eki 03 14:03:38 qlikadmin-VirtualBox dockerd[9375]: time="2019-10-03T14:03:38.591298053+03:00" level=warning msg="Your kernel does not support cgroup rt runtime"
Eki 03 14:03:38 qlikadmin-VirtualBox dockerd[9375]: time="2019-10-03T14:03:38.591309792+03:00" level=warning msg="Your kernel does not support cgroup blkio weight"
Eki 03 14:03:38 qlikadmin-VirtualBox dockerd[9375]: time="2019-10-03T14:03:38.591316465+03:00" level=warning msg="Your kernel does not support cgroup blkio weight_device"
Eki 03 14:03:38 qlikadmin-VirtualBox dockerd[9375]: time="2019-10-03T14:03:38.591761746+03:00" level=info msg="Loading containers: start."
Eki 03 14:03:38 qlikadmin-VirtualBox dockerd[9375]: time="2019-10-03T14:03:38.766809341+03:00" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Eki 03 14:03:38 qlikadmin-VirtualBox dockerd[9375]: time="2019-10-03T14:03:38.830123363+03:00" level=info msg="Loading containers: done."
Eki 03 14:03:38 qlikadmin-VirtualBox dockerd[9375]: time="2019-10-03T14:03:38.962945370+03:00" level=info msg="Docker daemon" commit=039a7df graphdriver(s)=overlay2 version=18.09.9
Eki 03 14:03:38 qlikadmin-VirtualBox dockerd[9375]: time="2019-10-03T14:03:38.963434475+03:00" level=info msg="Daemon has completed initialization"
Eki 03 14:03:38 qlikadmin-VirtualBox dockerd[9375]: time="2019-10-03T14:03:38.976106128+03:00" level=info msg="API listen on /var/run/docker.sock"
Eki 03 14:03:38 qlikadmin-VirtualBox systemd[1]: Started Docker Application Container Engine.

==> container status <==
time="2019-10-03T15:03:14+03:00" level=fatal msg="failed to connect: failed to connect: context deadline exceeded"
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS                         PORTS               NAMES
94963b5050b2        hello-world         "/hello"            About an hour ago   Exited (0) About an hour ago                       elated_dirac
3794fdfaa908        hello-world         "/hello"            About an hour ago   Exited (0) About an hour ago                       objective_carson

==> dmesg <==
[Eki 3 13:41] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
[ +1,048841] platform eisa.0: EISA: Cannot allocate resource for mainboard
[ +0,000001] platform eisa.0: Cannot allocate resource for EISA slot 1
[ +0,000001] platform eisa.0: Cannot allocate resource for EISA slot 2
[ +0,000001] platform eisa.0: Cannot allocate resource for EISA slot 3
[ +0,000001] platform eisa.0: Cannot allocate resource for EISA slot 4
[ +0,000000] platform eisa.0: Cannot allocate resource for EISA slot 5
[ +0,000001] platform eisa.0: Cannot allocate resource for EISA slot 6
[ +0,000001] platform eisa.0: Cannot allocate resource for EISA slot 7
[ +0,000001] platform eisa.0: Cannot allocate resource for EISA slot 8
[ +2,890171] vgdrvHeartbeatInit: Setting up heartbeat to trigger every 2000 milliseconds
[ +0,032530] vboxguest: misc device minor 54, IRQ 20, I/O port d040, MMIO at 00000000f1400000 (size 0x400000)
[ +0,192436] [drm:vmw_host_log [vmwgfx]] ERROR Failed to send host log message.
[ +0,001858] [drm:vmw_host_log [vmwgfx]] ERROR Failed to send host log message.
[Eki 3 13:53] kauditd_printk_skb: 28 callbacks suppressed

==> kernel <==
15:03:14 up 1:21, 1 user, load average: 0,18, 0,87, 0,82
Linux qlikadmin-VirtualBox 5.0.0-23-generic #24~18.04.1-Ubuntu SMP Mon Jul 29 16:12:28 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
PRETTY_NAME="Ubuntu 18.04.3 LTS"

==> kubelet <==
-- Logs begin at Thu 2019-10-03 13:34:03 +03, end at Thu 2019-10-03 15:03:14 +03. --
Eki 03 15:03:13 qlikadmin-VirtualBox kubelet[9753]: I1003 15:03:13.795794 9753 server.go:644] --cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /
Eki 03 15:03:13 qlikadmin-VirtualBox kubelet[9753]: I1003 15:03:13.796134 9753 container_manager_linux.go:265] container manager verified user specified cgroup-root exists: []
Eki 03 15:03:13 qlikadmin-VirtualBox kubelet[9753]: I1003 15:03:13.796155 9753 container_manager_linux.go:270] Creating Container Manager object based on Node Config: {RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: ContainerRuntime:docker CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:} {Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalCPUManagerReconcilePeriod:10s ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none}
Eki 03 15:03:13 qlikadmin-VirtualBox kubelet[9753]: I1003 15:03:13.796274 9753 fake_topology_manager.go:29] [fake topologymanager] NewFakeManager
Eki 03 15:03:13 qlikadmin-VirtualBox kubelet[9753]: I1003 15:03:13.796281 9753 container_manager_linux.go:305] Creating device plugin manager: true
Eki 03 15:03:13 qlikadmin-VirtualBox kubelet[9753]: I1003 15:03:13.796301 9753 fake_topology_manager.go:39] [fake topologymanager] AddHintProvider HintProvider: &{kubelet.sock /var/lib/kubelet/device-plugins/ map[] {0 0} {{} [0 0 0]} 0x1b6b070 0x799d338 0x1b6ba70 map[] map[] map[] map[] map[] 0xc0009d84e0 [0] 0x799d338}
Eki 03 15:03:13 qlikadmin-VirtualBox kubelet[9753]: I1003 15:03:13.796325 9753 state_mem.go:36] [cpumanager] initializing new in-memory state store
Eki 03 15:03:13 qlikadmin-VirtualBox kubelet[9753]: I1003 15:03:13.796387 9753 state_mem.go:84] [cpumanager] updated default cpuset: ""
Eki 03 15:03:13 qlikadmin-VirtualBox kubelet[9753]: I1003 15:03:13.796393 9753 state_mem.go:92] [cpumanager] updated cpuset assignments: "map[]"
Eki 03 15:03:13 qlikadmin-VirtualBox kubelet[9753]: I1003 15:03:13.796399 9753 fake_topology_manager.go:39] [fake topologymanager] AddHintProvider HintProvider: &{{0 0} 0x799d338 10000000000 0xc000989860 map[memory:{{104857600 0} {} BinarySI}]}
Eki 03 15:03:13 qlikadmin-VirtualBox kubelet[9753]: I1003 15:03:13.796474 9753 kubelet.go:287] Adding pod path: /etc/kubernetes/manifests
Eki 03 15:03:13 qlikadmin-VirtualBox kubelet[9753]: I1003 15:03:13.796496 9753 kubelet.go:312] Watching apiserver
Eki 03 15:03:13 qlikadmin-VirtualBox kubelet[9753]: E1003 15:03:13.807557 9753 reflector.go:123] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to list *v1.Pod: Get https://localhost:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dminikube&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
Eki 03 15:03:13 qlikadmin-VirtualBox kubelet[9753]: E1003 15:03:13.808290 9753 reflector.go:123] k8s.io/kubernetes/pkg/kubelet/kubelet.go:450: Failed to list *v1.Service: Get https://localhost:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
Eki 03 15:03:13 qlikadmin-VirtualBox kubelet[9753]: E1003 15:03:13.808703 9753 reflector.go:123] k8s.io/kubernetes/pkg/kubelet/kubelet.go:459: Failed to list *v1.Node: Get https://localhost:8443/api/v1/nodes?fieldSelector=metadata.name%3Dminikube&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
Eki 03 15:03:13 qlikadmin-VirtualBox kubelet[9753]: I1003 15:03:13.814474 9753 client.go:75] Connecting to docker on unix:///var/run/docker.sock
Eki 03 15:03:13 qlikadmin-VirtualBox kubelet[9753]: I1003 15:03:13.814500 9753 client.go:104] Start docker client with request timeout=2m0s
Eki 03 15:03:13 qlikadmin-VirtualBox kubelet[9753]: W1003 15:03:13.816190 9753 docker_service.go:563] Hairpin mode set to "promiscuous-bridge" but kubenet is not enabled, falling back to "hairpin-veth"
Eki 03 15:03:13 qlikadmin-VirtualBox kubelet[9753]: I1003 15:03:13.816401 9753 docker_service.go:240] Hairpin mode set to "hairpin-veth"
Eki 03 15:03:13 qlikadmin-VirtualBox kubelet[9753]: W1003 15:03:13.816578 9753 cni.go:237] Unable to update cni config: no networks found in /etc/cni/net.d
Eki 03 15:03:13 qlikadmin-VirtualBox kubelet[9753]: I1003 15:03:13.821590 9753 docker_service.go:255] Docker cri networking managed by kubernetes.io/no-op
Eki 03 15:03:13 qlikadmin-VirtualBox kubelet[9753]: I1003 15:03:13.841637 9753 docker_service.go:260] Docker Info: &{ID:ZTKG:R6TM:3DGH:NHQY:4MUC:LXNE:BM37:DUJI:OK73:WKE2:LEBP:BPKV Containers:2 ContainersRunning:0 ContainersPaused:0 ContainersStopped:2 Images:14 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:[] Plugins:{Volume:[local] Network:[bridge host macvlan null overlay] Authorization:[] Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:false IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6tables:true Debug:false NFd:22 OomKillDisable:true NGoroutines:37 SystemTime:2019-10-03T15:03:13.822957027+03:00 LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:5.0.0-23-generic OperatingSystem:Ubuntu 18.04.3 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:0xc0006f2770 NCPU:2 MemTotal:11817152512 GenericResources:[] DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:qlikadmin-VirtualBox Labels:[] ExperimentalBuild:false ServerVersion:18.09.9 ClusterStore: ClusterAdvertise: Runtimes:map[runc:{Path:runc Args:[]}] DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:[] Nodes:0 Managers:0 Cluster: Warnings:[]} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:894b81a4b802e4eb2a91d1ce216b8817763c29fb Expected:894b81a4b802e4eb2a91d1ce216b8817763c29fb} RuncCommit:{ID:425e105d5a03fabd737a126ad93d62a9eeede87f Expected:425e105d5a03fabd737a126ad93d62a9eeede87f} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense:Community Engine Warnings:[WARNING: No swap limit support]}
Eki 03 15:03:13 qlikadmin-VirtualBox kubelet[9753]: F1003 15:03:13.841714 9753 server.go:271] failed to run Kubelet: failed to create kubelet: misconfiguration: kubelet cgroup driver: "cgroupfs" is different from docker cgroup driver: "systemd"
Eki 03 15:03:13 qlikadmin-VirtualBox systemd[1]: kubelet.service: Main process exited, code=exited, status=255/n/a
Eki 03 15:03:13 qlikadmin-VirtualBox systemd[1]: kubelet.service: Failed with result 'exit-code'.
Eki 03 15:03:14 qlikadmin-VirtualBox systemd[1]: kubelet.service: Service hold-off time over, scheduling restart.
Eki 03 15:03:14 qlikadmin-VirtualBox systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 413.
Eki 03 15:03:14 qlikadmin-VirtualBox systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
Eki 03 15:03:14 qlikadmin-VirtualBox systemd[1]: Started kubelet: The Kubernetes Node Agent.
Eki 03 15:03:14 qlikadmin-VirtualBox kubelet[9839]: Flag --authorization-mode has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Eki 03 15:03:14 qlikadmin-VirtualBox kubelet[9839]: Flag --cgroup-driver has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Eki 03 15:03:14 qlikadmin-VirtualBox kubelet[9839]: Flag --client-ca-file has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Eki 03 15:03:14 qlikadmin-VirtualBox kubelet[9839]: Flag --cluster-dns has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Eki 03 15:03:14 qlikadmin-VirtualBox kubelet[9839]: Flag --cluster-domain has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Eki 03 15:03:14 qlikadmin-VirtualBox kubelet[9839]: Flag --fail-swap-on has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Eki 03 15:03:14 qlikadmin-VirtualBox kubelet[9839]: Flag --pod-manifest-path has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Eki 03 15:03:14 qlikadmin-VirtualBox kubelet[9839]: Flag --resolv-conf has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Eki 03 15:03:14 qlikadmin-VirtualBox kubelet[9839]: I1003 15:03:14.746770 9839 server.go:410] Version: v1.16.0
Eki 03 15:03:14 qlikadmin-VirtualBox kubelet[9839]: I1003 15:03:14.747005 9839 plugins.go:100] No cloud provider specified.
Eki 03 15:03:14 qlikadmin-VirtualBox kubelet[9839]: I1003 15:03:14.841290 9839 server.go:644] --cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /
Eki 03 15:03:14 qlikadmin-VirtualBox kubelet[9839]: I1003 15:03:14.841876 9839 container_manager_linux.go:265] container manager verified user specified cgroup-root exists: []
Eki 03 15:03:14 qlikadmin-VirtualBox kubelet[9839]: I1003 15:03:14.842019 9839 container_manager_linux.go:270] Creating Container Manager object based on Node Config: {RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: ContainerRuntime:docker CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:} {Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalCPUManagerReconcilePeriod:10s ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none}
Eki 03 15:03:14 qlikadmin-VirtualBox kubelet[9839]: I1003 15:03:14.842600 9839 fake_topology_manager.go:29] [fake topologymanager] NewFakeManager
Eki 03 15:03:14 qlikadmin-VirtualBox kubelet[9839]: I1003 15:03:14.842715 9839 container_manager_linux.go:305] Creating device plugin manager: true
Eki 03 15:03:14 qlikadmin-VirtualBox kubelet[9839]: I1003 15:03:14.842868 9839 fake_topology_manager.go:39] [fake topologymanager] AddHintProvider HintProvider: &{kubelet.sock /var/lib/kubelet/device-plugins/ map[] {0 0} {{} [0 0 0]} 0x1b6b070 0x799d338 0x1b6ba70 map[] map[] map[] map[] map[] 0xc000039aa0 [0] 0x799d338}
Eki 03 15:03:14 qlikadmin-VirtualBox kubelet[9839]: I1003 15:03:14.842991 9839 state_mem.go:36] [cpumanager] initializing new in-memory state store
Eki 03 15:03:14 qlikadmin-VirtualBox kubelet[9839]: I1003 15:03:14.843164 9839 state_mem.go:84] [cpumanager] updated default cpuset: ""
Eki 03 15:03:14 qlikadmin-VirtualBox kubelet[9839]: I1003 15:03:14.843262 9839 state_mem.go:92] [cpumanager] updated cpuset assignments: "map[]"
Eki 03 15:03:14 qlikadmin-VirtualBox kubelet[9839]: I1003 15:03:14.843607 9839 fake_topology_manager.go:39] [fake topologymanager] AddHintProvider HintProvider: &{{0 0} 0x799d338 10000000000 0xc00015ef60 map[memory:{{104857600 0} {} BinarySI}]}
Eki 03 15:03:14 qlikadmin-VirtualBox kubelet[9839]: I1003 15:03:14.843843 9839 kubelet.go:287] Adding pod path: /etc/kubernetes/manifests
Eki 03 15:03:14 qlikadmin-VirtualBox kubelet[9839]: I1003 15:03:14.843964 9839 kubelet.go:312] Watching apiserver
Eki 03 15:03:14 qlikadmin-VirtualBox kubelet[9839]: E1003 15:03:14.856556 9839 reflector.go:123] k8s.io/kubernetes/pkg/kubelet/kubelet.go:450: Failed to list *v1.Service: Get https://localhost:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
Eki 03 15:03:14 qlikadmin-VirtualBox kubelet[9839]: E1003 15:03:14.856672 9839 reflector.go:123] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to list *v1.Pod: Get https://localhost:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dminikube&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
Eki 03 15:03:14 qlikadmin-VirtualBox kubelet[9839]: E1003 15:03:14.856737 9839 reflector.go:123] k8s.io/kubernetes/pkg/kubelet/kubelet.go:459: Failed to list *v1.Node: Get https://localhost:8443/api/v1/nodes?fieldSelector=metadata.name%3Dminikube&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
Eki 03 15:03:14 qlikadmin-VirtualBox kubelet[9839]: I1003 15:03:14.859961 9839 client.go:75] Connecting to docker on unix:///var/run/docker.sock
Eki 03 15:03:14 qlikadmin-VirtualBox kubelet[9839]: I1003 15:03:14.859985 9839 client.go:104] Start docker client with request timeout=2m0s
Eki 03 15:03:14 qlikadmin-VirtualBox kubelet[9839]: W1003 15:03:14.862545 9839 docker_service.go:563] Hairpin mode set to "promiscuous-bridge" but kubenet is not enabled, falling back to "hairpin-veth"
Eki 03 15:03:14 qlikadmin-VirtualBox kubelet[9839]: I1003 15:03:14.862563 9839 docker_service.go:240] Hairpin mode set to "hairpin-veth"
Eki 03 15:03:14 qlikadmin-VirtualBox kubelet[9839]: W1003 15:03:14.862630 9839 cni.go:237] Unable to update cni config: no networks found in /etc/cni/net.d
Eki 03 15:03:14 qlikadmin-VirtualBox kubelet[9839]: I1003 15:03:14.871792 9839 docker_service.go:255] Docker cri networking managed by kubernetes.io/no-op
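
The fatal line in the kubelet log above is the actual root cause: the kubelet was started with the cgroupfs cgroup driver while this Docker daemon reports CgroupDriver:systemd, so the kubelet exits on startup, restarts in a loop (restart counter 413), and the apiserver static pod is never created. Two possible workarounds, as a sketch only (the extra-config flag and the Docker exec-opt are the standard ones, but verify them against your minikube/Docker versions):

# Option 1: start the kubelet with the systemd cgroup driver, matching Docker
sudo minikube start --vm-driver=none --extra-config=kubelet.cgroup-driver=systemd

# Option 2: switch Docker to cgroupfs instead (merge into /etc/docker/daemon.json if that file already exists)
echo '{"exec-opts": ["native.cgroupdriver=cgroupfs"]}' | sudo tee /etc/docker/daemon.json
sudo systemctl restart docker

Either option makes the two drivers agree, which is exactly what the "misconfiguration: kubelet cgroup driver" fatal is complaining about.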

@cytown

cytown commented Oct 12, 2019

Same problem here, on Debian 10.0...

@RobDavenport

RobDavenport commented Nov 5, 2019

Also hitting this same problem here, on a Chromebook. The main issue is that minikube seems to have trouble getting the kube-apiserver up and running.

Getting this repeated many times:

stderr: : exit status 1 cmd: sudo pgrep kube-apiserver
I1105 16:10:42.608973   17406 exec_runner.go:42] (ExecRunner) Run:  sudo pgrep kube-apiserver
I1105 16:10:42.682309   17406 exec_runner.go:74] (ExecRunner) Non-zero exit: sudo pgrep kube-apiserver: exit status 1 (73.207877ms)
W1105 16:10:42.682469   17406 kubeadm.go:502] pgrep apiserver: command failed: sudo pgrep kube-apiserver
stdout: 
stderr: : exit status 1 cmd: sudo pgrep kube-apiserver
I1105 16:10:42.910070   17406 exec_runner.go:42] (ExecRunner) Run:  sudo pgrep kube-apiserver
I1105 16:10:42.964500   17406 exec_runner.go:74] (ExecRunner) Non-zero exit: sudo pgrep kube-apiserver: exit status 1 (53.537555ms)
W1105 16:10:42.987164   17406 kubeadm.go:502] pgrep apiserver: command failed: sudo pgrep kube-apiserver
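
Those repeated lines are just minikube polling for the apiserver: sudo pgrep kube-apiserver exiting with status 1 means no such process exists yet, so the poll eventually times out with "apiserver process never appeared". You can confirm by hand with plain pgrep/docker (nothing minikube-specific here):

sudo pgrep -a kube-apiserver             # no output = the apiserver process was never started
sudo docker ps -a | grep kube-apiserver  # shows whether the apiserver container was even created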

The logs look similar to the ones posted by others, except I'm running Debian 9.8 instead of Ubuntu.

Full output, including the command:

sudo minikube start --vm-driver=none
πŸ˜„  minikube v1.5.2 on Debian 9.8
πŸ’‘  Tip: Use 'minikube start -p <name>' to create a new cluster, or 'minikube delete' to delete this one.
πŸƒ  Using the running none "minikube" VM ...
βŒ›  Waiting for the host to be provisioned ...
⚠️  VM may be unable to resolve external DNS records
🐳  Preparing Kubernetes v1.16.2 on Docker '19.03.4' ...
    β–ͺ kubelet.resolv-conf=/run/systemd/resolve/resolv.conf
πŸ”„  Relaunching Kubernetes using kubeadm ... 

πŸ’£  Error restarting cluster: waiting for apiserver: apiserver process never appeared

😿  Sorry that minikube crashed. If this was unexpected, we would love to hear from you:
πŸ‘‰  https://github.com/kubernetes/minikube/issues/new/choose

Log output:

sudo minikube logs
==> Docker <==
-- Logs begin at Tue 2019-11-05 15:41:27 JST, end at Tue 2019-11-05 16:36:50 JST. --
Nov 05 15:41:27 penguin systemd[1]: docker.service: Failed to set invocation ID on control group /system.slice/docker.service, ignoring: Operation not permitted
Nov 05 15:41:27 penguin systemd[1]: Starting Docker Application Container Engine...
Nov 05 15:41:28 penguin dockerd[97]: time="2019-11-05T15:41:28.310513749+09:00" level=info msg="Starting up"
Nov 05 15:41:28 penguin dockerd[97]: time="2019-11-05T15:41:28.324012129+09:00" level=info msg="parsed scheme: \"unix\"" module=grpc
Nov 05 15:41:28 penguin dockerd[97]: time="2019-11-05T15:41:28.325338224+09:00" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Nov 05 15:41:28 penguin dockerd[97]: time="2019-11-05T15:41:28.325650269+09:00" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0  <nil>}] <nil>}" module=grpc
Nov 05 15:41:28 penguin dockerd[97]: time="2019-11-05T15:41:28.325977889+09:00" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Nov 05 15:41:28 penguin dockerd[97]: time="2019-11-05T15:41:28.331246248+09:00" level=info msg="parsed scheme: \"unix\"" module=grpc
Nov 05 15:41:28 penguin dockerd[97]: time="2019-11-05T15:41:28.334108973+09:00" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Nov 05 15:41:28 penguin dockerd[97]: time="2019-11-05T15:41:28.334432445+09:00" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0  <nil>}] <nil>}" module=grpc
Nov 05 15:41:28 penguin dockerd[97]: time="2019-11-05T15:41:28.334710424+09:00" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Nov 05 15:41:28 penguin dockerd[97]: time="2019-11-05T15:41:28.347865469+09:00" level=info msg="[graphdriver] using prior storage driver: btrfs"
Nov 05 15:41:28 penguin dockerd[97]: time="2019-11-05T15:41:28.508190431+09:00" level=warning msg="Your kernel does not support swap memory limit"
Nov 05 15:41:28 penguin dockerd[97]: time="2019-11-05T15:41:28.508774475+09:00" level=info msg="Loading containers: start."
Nov 05 15:41:29 penguin dockerd[97]: time="2019-11-05T15:41:29.320285858+09:00" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Nov 05 15:41:29 penguin dockerd[97]: time="2019-11-05T15:41:29.494217274+09:00" level=info msg="Loading containers: done."
Nov 05 15:41:29 penguin dockerd[97]: time="2019-11-05T15:41:29.551156303+09:00" level=info msg="Docker daemon" commit=9013bf583a graphdriver(s)=btrfs version=19.03.4
Nov 05 15:41:29 penguin dockerd[97]: time="2019-11-05T15:41:29.551292898+09:00" level=info msg="Daemon has completed initialization"
Nov 05 15:41:29 penguin dockerd[97]: time="2019-11-05T15:41:29.603663453+09:00" level=info msg="API listen on /var/run/docker.sock"
Nov 05 15:41:29 penguin systemd[1]: Started Docker Application Container Engine.
Nov 05 15:41:38 penguin dockerd[97]: time="2019-11-05T15:41:38.994031627+09:00" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Nov 05 15:41:39 penguin dockerd[97]: time="2019-11-05T15:41:39.128646363+09:00" level=warning msg="c9bae9a6ca35ae9187468450504ba36192be253cd9123d18c81c6465e7dedd3f cleanup: failed to unmount IPC: umount /var/lib/docker/containers/c9bae9a6ca35ae9187468450504ba36192be253cd9123d18c81c6465e7dedd3f/mounts/shm, flags: 0x2: no such file or directory"
Nov 05 15:42:03 penguin systemd[1]: docker.service: Failed to reset devices.list: Operation not permitted
Nov 05 15:42:03 penguin systemd[1]: docker.service: Failed to set invocation ID on control group /system.slice/docker.service, ignoring: Operation not permitted
Nov 05 15:43:30 penguin dockerd[97]: time="2019-11-05T15:43:30.187035004+09:00" level=error msg="Handler for GET /images/json returned error: write unix /var/run/docker.sock->@: write: broken pipe"
Nov 05 15:43:30 penguin dockerd[97]: http: superfluous response.WriteHeader call from github.com/docker/docker/api/server/httputils.WriteJSON (httputils_write_json.go:11)
Nov 05 15:45:19 penguin dockerd[97]: time="2019-11-05T15:45:19.600935282+09:00" level=error msg="Handler for GET /images/json returned error: write unix /var/run/docker.sock->@: write: broken pipe"
Nov 05 15:45:19 penguin dockerd[97]: http: superfluous response.WriteHeader call from github.com/docker/docker/api/server/httputils.WriteJSON (httputils_write_json.go:11)
Nov 05 15:49:05 penguin systemd[1]: docker.service: Failed to reset devices.list: Operation not permitted
Nov 05 15:49:05 penguin systemd[1]: docker.service: Failed to set invocation ID on control group /system.slice/docker.service, ignoring: Operation not permitted
Nov 05 15:53:35 penguin systemd[1]: docker.service: Failed to reset devices.list: Operation not permitted
Nov 05 15:53:35 penguin systemd[1]: docker.service: Failed to set invocation ID on control group /system.slice/docker.service, ignoring: Operation not permitted
Nov 05 15:53:59 penguin systemd[1]: docker.service: Failed to reset devices.list: Operation not permitted
Nov 05 15:53:59 penguin systemd[1]: docker.service: Failed to set invocation ID on control group /system.slice/docker.service, ignoring: Operation not permitted
Nov 05 15:54:15 penguin systemd[1]: docker.service: Failed to reset devices.list: Operation not permitted
Nov 05 15:54:15 penguin systemd[1]: docker.service: Failed to set invocation ID on control group /system.slice/docker.service, ignoring: Operation not permitted
Nov 05 15:58:36 penguin systemd[1]: docker.service: Failed to reset devices.list: Operation not permitted
Nov 05 15:58:36 penguin systemd[1]: docker.service: Failed to set invocation ID on control group /system.slice/docker.service, ignoring: Operation not permitted
Nov 05 15:58:55 penguin systemd[1]: docker.service: Failed to reset devices.list: Operation not permitted
Nov 05 15:58:55 penguin systemd[1]: docker.service: Failed to set invocation ID on control group /system.slice/docker.service, ignoring: Operation not permitted
Nov 05 16:04:47 penguin systemd[1]: docker.service: Failed to reset devices.list: Operation not permitted
Nov 05 16:04:47 penguin systemd[1]: docker.service: Failed to set invocation ID on control group /system.slice/docker.service, ignoring: Operation not permitted
Nov 05 16:09:09 penguin systemd[1]: docker.service: Failed to reset devices.list: Operation not permitted
Nov 05 16:09:09 penguin systemd[1]: docker.service: Failed to set invocation ID on control group /system.slice/docker.service, ignoring: Operation not permitted
Nov 05 16:09:53 penguin dockerd[97]: time="2019-11-05T16:09:53.025082633+09:00" level=error msg="Handler for GET /images/json returned error: write unix /var/run/docker.sock->@: write: broken pipe"
Nov 05 16:09:53 penguin dockerd[97]: http: superfluous response.WriteHeader call from github.com/docker/docker/api/server/httputils.WriteJSON (httputils_write_json.go:11)
Nov 05 16:10:36 penguin dockerd[97]: time="2019-11-05T16:10:36.958639871+09:00" level=error msg="Handler for GET /images/json returned error: write unix /var/run/docker.sock->@: write: broken pipe"
Nov 05 16:10:36 penguin dockerd[97]: http: superfluous response.WriteHeader call from github.com/docker/docker/api/server/httputils.WriteJSON (httputils_write_json.go:11)
Nov 05 16:11:43 penguin dockerd[97]: time="2019-11-05T16:11:43.036814410+09:00" level=error msg="Handler for GET /images/json returned error: write unix /var/run/docker.sock->@: write: broken pipe"
Nov 05 16:11:43 penguin dockerd[97]: http: superfluous response.WriteHeader call from github.com/docker/docker/api/server/httputils.WriteJSON (httputils_write_json.go:11)
Nov 05 16:11:43 penguin dockerd[97]: time="2019-11-05T16:11:43.052834162+09:00" level=error msg="Handler for GET /images/json returned error: write unix /var/run/docker.sock->@: write: broken pipe"
Nov 05 16:11:43 penguin dockerd[97]: http: superfluous response.WriteHeader call from github.com/docker/docker/api/server/httputils.WriteJSON (httputils_write_json.go:11)
Nov 05 16:15:21 penguin systemd[1]: docker.service: Failed to reset devices.list: Operation not permitted
Nov 05 16:15:21 penguin systemd[1]: docker.service: Failed to set invocation ID on control group /system.slice/docker.service, ignoring: Operation not permitted
Nov 05 16:18:15 penguin dockerd[97]: time="2019-11-05T16:18:15.985207063+09:00" level=error msg="Handler for GET /images/json returned error: write unix /var/run/docker.sock->@: write: broken pipe"
Nov 05 16:18:15 penguin dockerd[97]: http: superfluous response.WriteHeader call from github.com/docker/docker/api/server/httputils.WriteJSON (httputils_write_json.go:11)
Nov 05 16:29:29 penguin dockerd[97]: time="2019-11-05T16:29:29.553126989+09:00" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Nov 05 16:29:29 penguin dockerd[97]: time="2019-11-05T16:29:29.693053166+09:00" level=warning msg="c2fed78a97dc7f24cb193571e71a3b2007d8b47a08b94096927948f6992022ea cleanup: failed to unmount IPC: umount /var/lib/docker/containers/c2fed78a97dc7f24cb193571e71a3b2007d8b47a08b94096927948f6992022ea/mounts/shm, flags: 0x2: no such file or directory"
Nov 05 16:29:55 penguin systemd[1]: docker.service: Failed to reset devices.list: Operation not permitted
Nov 05 16:29:55 penguin systemd[1]: docker.service: Failed to set invocation ID on control group /system.slice/docker.service, ignoring: Operation not permitted

==> container status <==
sudo: crictl: command not found
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS                           PORTS               NAMES
c2fed78a97dc        hello-world         "/hello"            7 minutes ago       Exited (0) 7 minutes ago                             elated_dewdney
c9bae9a6ca35        hello-world         "/hello"            55 minutes ago      Exited (0) 55 minutes ago                            lucid_curran
7b4b388b7af3        hello-world         "/hello"            About an hour ago   Exited (0) About an hour ago                         unruffled_benz
ac5e981c0b79        ubuntu              "bash"              2 hours ago         Exited (130) About an hour ago                       optimistic_brahmagupta
b1c66ed7c639        hello-world         "/hello"            2 hours ago         Exited (0) 2 hours ago                               jolly_swartz
1da62ac133d4        hello-world         "/hello"            2 hours ago         Exited (0) 2 hours ago                               nice_clarke
ecfd41557c92        hello-world         "/hello"            2 hours ago         Exited (0) 2 hours ago                               stupefied_fermat
66f9d69a8fd5        hello-world         "/hello"            2 hours ago         Exited (0) 2 hours ago                               wizardly_yalow
5e531c27e58e        hello-world         "/hello"            2 hours ago         Exited (0) 2 hours ago                               laughing_murdock
cff548281e55        hello-world         "/hello"            3 hours ago         Exited (0) 3 hours ago                               determined_sutherland
1a6e4d8dbc72        hello-world         "/hello"            3 hours ago         Exited (0) 3 hours ago                               determined_mcnulty

==> dmesg <==
[Nov 5 15:41] ACPI BIOS Error (bug): A valid RSDP was not found (20180810/tbxfroot-210)
[  +0.054638] [Firmware Bug]: CPU1: APIC id mismatch. Firmware: 1 APIC: 3
[  +0.448256] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
[  +0.009004]  #2
[  -0.457260] [Firmware Bug]: CPU2: APIC id mismatch. Firmware: 2 APIC: 3
[  +0.470260]  #3
[  +0.472090] rtc_cmos rtc_cmos: only 24-hr supported
[  +4.755444] cgroup: lxd (295) created nested cgroup for controller "memory" which has incomplete hierarchy support. Nested cgroups may change behavior in the future.
[  +0.001613] cgroup: "memory" requires setting use_hierarchy to 1 on the root
[  +0.124775] new mount options do not match the existing superblock, will be ignored
[Nov 5 15:44] hrtimer: interrupt took 12247435 ns

==> kernel <==
 16:36:50 up 55 min,  0 users,  load average: 0.14, 0.27, 0.27
Linux penguin 4.19.69-06666-g6c4f8cbba24e #1 SMP PREEMPT Fri Sep 6 22:15:24 PDT 2019 x86_64 GNU/Linux
PRETTY_NAME="Debian GNU/Linux 9 (stretch)"

==> kubelet <==
-- Logs begin at Tue 2019-11-05 15:41:27 JST, end at Tue 2019-11-05 16:36:50 JST. --
Nov 05 16:36:32 penguin kubelet[637]: E1105 16:36:32.085582     637 reflector.go:123] k8s.io/kubernetes/pkg/kubelet/kubelet.go:450: Failed to list *v1.Service: Get https://localhost:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp [::1]:8443: connect: connection refused
Nov 05 16:36:33 penguin kubelet[637]: E1105 16:36:33.067141     637 reflector.go:123] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to list *v1.Pod: Get https://localhost:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dminikube&limit=500&resourceVersion=0: dial tcp [::1]:8443: connect: connection refused
Nov 05 16:36:33 penguin kubelet[637]: E1105 16:36:33.080880     637 reflector.go:123] k8s.io/kubernetes/pkg/kubelet/kubelet.go:459: Failed to list *v1.Node: Get https://localhost:8443/api/v1/nodes?fieldSelector=metadata.name%3Dminikube&limit=500&resourceVersion=0: dial tcp [::1]:8443: connect: connection refused
[... the same three reflector errors (failed to list *v1.Service, *v1.Pod, and *v1.Node against https://localhost:8443, connection refused) repeat roughly once per second until 16:36:49 ...]
Nov 05 16:36:49 penguin kubelet[637]: E1105 16:36:49.210055     637 reflector.go:123] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to list *v1.Pod: Get https://localhost:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dminikube&limit=500&resourceVersion=0: dial tcp [::1]:8443: connect: connection refused
Nov 05 16:36:49 penguin kubelet[637]: E1105 16:36:49.670556     637 aws_credentials.go:77] while getting AWS credentials NoCredentialProviders: no valid providers in chain. Deprecated.
Nov 05 16:36:49 penguin kubelet[637]:         For verbose messaging see aws.Config.CredentialsChainVerboseErrors
Nov 05 16:36:49 penguin kubelet[637]: I1105 16:36:49.724303     637 kuberuntime_manager.go:207] Container runtime docker initialized, version: 19.03.4, apiVersion: 1.40.0
Nov 05 16:36:49 penguin kubelet[637]: I1105 16:36:49.749335     637 server.go:1065] Started kubelet
Nov 05 16:36:49 penguin kubelet[637]: F1105 16:36:49.751092     637 kubelet.go:1413] failed to start OOM watcher open /dev/kmsg: no such file or directory
Nov 05 16:36:49 penguin systemd[1]: kubelet.service: Main process exited, code=exited, status=255/n/a
Nov 05 16:36:49 penguin systemd[1]: kubelet.service: Unit entered failed state.
Nov 05 16:36:49 penguin systemd[1]: kubelet.service: Failed with result 'exit-code'.
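
The fatal kubelet line above ("failed to start OOM watcher open /dev/kmsg") is the actual crash: the kubelet's OOM watcher needs /dev/kmsg, and containerized hosts (this looks like Crostini/LXD, given the penguin hostname and the lxd cgroup messages in dmesg) often don't expose it. A minimal check plus one commonly suggested workaround, assuming root access (1 and 11 are the standard major/minor numbers for the kmsg character device):

# confirm the device really is missing
ls -l /dev/kmsg

# recreate it by hand, then restart the kubelet
sudo mknod /dev/kmsg c 1 11
sudo systemctl restart kubelet

This only clears the specific open /dev/kmsg error; whether the cluster then comes up depends on what else the container hides from the kubelet.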

@nanikjava
Contributor

nanikjava commented Nov 5, 2019

Seems like there is a permission issue with Docker:

Nov 05 15:41:27 penguin systemd[1]: docker.service: Failed to set invocation ID on control group /system.slice/docker.service, ignoring: Operation not permitted
Nov 05 15:41:27 penguin systemd[1]: Starting Docker Application Container Engine...
Nov 05 15:41:28 penguin dockerd[97]: time="2019-11-05T15:41:28.310513749+09:00" level=

Not sure what to do, as I've never tried the none driver before.

Has this ever worked before? If it has, then maybe check the account setup for the docker service.
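
If it helps to rule that out, the docker service state can be inspected with plain systemd tooling; a quick sketch, nothing minikube-specific:

systemctl status docker
sudo journalctl -u docker -n 100   # recent daemon logs, including the invocation-ID warning quoted above
docker info                        # fails with a permission error if this account can't reach /var/run/docker.sock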

@BrianChristie

BrianChristie commented Nov 11, 2019

I'm also experiencing the repeating pgrep error. This is on macOS 10.15 with HyperKit.

I'm not sure what additional troubleshooting information would be helpful here; I haven't been able to figure anything out so far.

I1111 18:18:07.469877   11116 ssh_runner.go:139] (SSHRunner) Non-zero exit: sudo pgrep kube-apiserver: Process exited with status 1 (936.021Β΅s)
W1111 18:18:07.469902   11116 kubeadm.go:502] pgrep apiserver: Process exited with status 1 cmd: sudo pgrep kube-apiserver

@abaranchuk

I'm also experiencing the repeating pgrep error. This is on macOS 10.15 with HyperKit.

I'm not sure what additional troubleshooting information would be helpful here; I haven't been able to figure anything out so far.

I1111 18:18:07.469877   11116 ssh_runner.go:139] (SSHRunner) Non-zero exit: sudo pgrep kube-apiserver: Process exited with status 1 (936.021Β΅s)
W1111 18:18:07.469902   11116 kubeadm.go:502] pgrep apiserver: Process exited with status 1 cmd: sudo pgrep kube-apiserver

The same happens on Windows 10.

@tstromberg
Contributor

@BrianChristie @abaranchuk - The repeated pgrep is expected behavior, and will be improved in the next release. Is there a particular error you are trying to fix? I recommend a new issue, since this one is specific to the none driver.
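
For anyone who wants to see what that wait loop is polling for, the same checks can be reproduced by hand (these mirror the pgrep and docker ps commands visible in the logs in this thread):

sudo pgrep -fl kube-apiserver                                      # the process minikube polls for
docker ps -a --filter=name=k8s_kube-apiserver --format="{{.ID}}"   # the container, if it was ever created

If both come back empty, the apiserver genuinely never started and the kubelet journal is the place to look next.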

@rusenask

Got the same issue (yesterday minikube just stopped working); apparently it was the docker snap installing itself and then filling the disk with logs (https://www.reddit.com/r/Ubuntu/comments/dwoizd/docker_snap_just_installed_all_by_itself/)
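
If the snap turns out to be the culprit, a rough cleanup sequence (assuming the snap-installed docker is not the engine minikube should be using):

df -h /                              # confirm the disk really is full
sudo snap remove docker              # drop the self-installed snap
sudo journalctl --vacuum-size=200M   # reclaim space eaten by logs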

@nanw1103

Same error here. docker 18.09.9, VMware Photon OS 3, minikube 1.5.2.

@kirksw

kirksw commented Nov 17, 2019

Same error here also. Docker 19.03.2, Ubuntu 19.10 (POP OS), Minikube 1.5.2

@cdfairchild

Same error here also.

C:\WINDOWS\system32>minikube start --vm-driver hyperv --hyperv-virtual-switch "minikube v switch"

  • minikube v1.5.2 on Microsoft Windows 10 Pro 10.0.18362 Build 18362
  • Tip: Use 'minikube start -p <name>' to create a new cluster, or 'minikube delete' to delete this one.
  • Using the running hyperv "minikube" VM ...
  • Waiting for the host to be provisioned ...
  • Preparing Kubernetes v1.16.2 on Docker '18.09.9' ...
  • Relaunching Kubernetes using kubeadm ...

X Error restarting cluster: waiting for apiserver: apiserver process never appeared
*

@cdfairchild

I ran minikube delete and then minikube start --vm-driver hyperv --hyperv-virtual-switch "minikube v switch" and things started just fine.
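
For reference, that reset sequence in full (the switch name here is specific to this Hyper-V setup):

minikube delete
minikube start --vm-driver hyperv --hyperv-virtual-switch "minikube v switch"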

@qq516249940

I ran minikube delete and then minikube start --vm-driver hyperv --hyperv-virtual-switch "minikube v switch" and things started just fine.

Running the command minikube delete resolved the problem for me too!

@diegosasw

Is specifying the switch for Hyper-V required? If so, why?

@magnologan

Same error here. CentOS 7.7 and minikube 1.6.2.

Here's the output of minikube start --vm-driver=none --apiserver-ips 127.0.0.1 --apiserver-name localhost --alsologtostderr -v=8

I0121 16:04:51.188866   64363 notify.go:125] Checking for updates...
I0121 16:04:51.557841   64363 start.go:255] hostinfo: {"hostname":"localhost.localdomain","uptime":3143,"bootTime":1579637548,"procs":343,"os":"linux","platform":"centos","platformFamily":"rhel","platformVersion":"7.7.1908","kernelVersion":"3.10.0-1062.el7.x86_64","virtualizationSystem":"","virtualizationRole":"","hostid":"43864d56-bde9-dd19-405e-def8daf2246b"}
I0121 16:04:51.558638   64363 start.go:265] virtualization:  
πŸ˜„  minikube v1.6.2 on Centos 7.7.1908
I0121 16:04:51.558996   64363 start.go:555] selectDriver: flag="none", old=&{minikube false false https://storage.googleapis.com/minikube/iso/minikube-v1.6.0.iso 2000 2 20000 none docker  [] [] [] [] 192.168.99.1/24  default qemu:///system false false <nil> [] false [] /nfsshares  false false true {v1.17.0 192.168.190.128 8443 minikube minikubeCA [] [] cluster.local docker    10.96.0.0/12  [] false false} virtio virtio}
I0121 16:04:51.559175   64363 global.go:60] Querying for installed drivers using PATH=/root/.minikube/bin:/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin:/root/bin
I0121 16:04:51.559292   64363 global.go:68] none priority: 2, state: {Installed:true Healthy:true Error:<nil> Fix: Doc:}
I0121 16:04:51.559379   64363 global.go:68] virtualbox priority: 4, state: {Installed:false Healthy:false Error:unable to find VBoxManage in $PATH Fix:Install VirtualBox Doc:https://minikube.sigs.k8s.io/docs/reference/drivers/virtualbox/}
I0121 16:04:51.559414   64363 global.go:68] vmware priority: 5, state: {Installed:false Healthy:false Error:exec: "docker-machine-driver-vmware": executable file not found in $PATH Fix:Install docker-machine-driver-vmware Doc:https://minikube.sigs.k8s.io/docs/reference/drivers/vmware/}
I0121 16:04:51.559440   64363 global.go:68] kvm2 priority: 6, state: {Installed:false Healthy:false Error:exec: "virsh": executable file not found in $PATH Fix:Install libvirt Doc:https://minikube.sigs.k8s.io/docs/reference/drivers/kvm2/}
I0121 16:04:51.559454   64363 driver.go:109] requested: "none"
I0121 16:04:51.559483   64363 driver.go:113] choosing "none" because it was requested
I0121 16:04:51.559491   64363 driver.go:146] Picked: none
I0121 16:04:51.559497   64363 driver.go:147] Alternatives: []
✨  Selecting 'none' driver from user configuration (alternates: [])
I0121 16:04:51.559574   64363 start.go:297] selected driver: none
I0121 16:04:51.559582   64363 start.go:585] validating driver "none" against &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.6.0.iso Memory:2000 CPUs:2 DiskSize:20000 VMDriver:none ContainerRuntime:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false Downloader:<nil> DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true KubernetesConfig:{KubernetesVersion:v1.17.0 NodeIP:192.168.190.128 NodePort:8443 NodeName:minikube APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false} HostOnlyNicType:virtio NatNicType:virtio}
I0121 16:04:51.559625   64363 start.go:591] status for none: {Installed:true Healthy:true Error:<nil> Fix: Doc:}
I0121 16:04:51.560357   64363 profile.go:89] Saving config to /root/.minikube/profiles/minikube/config.json ...
I0121 16:04:51.560610   64363 cluster.go:102] Skipping create...Using existing machine configuration
πŸ’‘  Tip: Use 'minikube start -p <name>' to create a new cluster, or 'minikube delete' to delete this one.
I0121 16:04:51.561023   64363 none.go:257] checking for running kubelet ...
I0121 16:04:51.561053   64363 exec_runner.go:43] Run: systemctl is-active --quiet service kubelet
I0121 16:04:51.567397   64363 none.go:127] kubelet not running: check kubelet: systemctl is-active --quiet service kubelet: exit status 3
stdout:

stderr:
I0121 16:04:51.567412   64363 cluster.go:114] Machine state:  Stopped
πŸ”„  Starting existing none VM for "minikube" ...
I0121 16:04:51.569170   64363 cluster.go:132] engine options: &{ArbitraryFlags:[] DNS:[] GraphDir: Env:[] Ipv6:false InsecureRegistry:[10.96.0.0/12] Labels:[] LogLevel: StorageDriver: SelinuxEnabled:false TLSVerify:false RegistryMirror:[] InstallURL:https://get.docker.com}
βŒ›  Waiting for the host to be provisioned ...
I0121 16:04:51.569260   64363 cluster.go:145] configureHost: &{BaseDriver:0xc0002a2a00 CommonDriver:<nil> URL:tcp://192.168.190.128:2376 runtime:0xc0006d9960 exec:0x24d05d0}
I0121 16:04:51.569276   64363 cluster.go:164] none is a local driver, skipping auth/time setup
I0121 16:04:51.569285   64363 cluster.go:147] configureHost completed within 26.854Β΅s
I0121 16:04:51.569720   64363 exec_runner.go:43] Run: nslookup kubernetes.io
I0121 16:04:51.777065   64363 exec_runner.go:43] Run: curl -sS https://k8s.gcr.io/
I0121 16:04:52.145482   64363 profile.go:89] Saving config to /root/.minikube/profiles/minikube/config.json ...
I0121 16:04:52.146010   64363 exec_runner.go:43] Run: sudo systemctl start docker
I0121 16:04:52.291110   64363 exec_runner.go:43] Run: docker version --format '{{.Server.Version}}'
🐳  Preparing Kubernetes v1.17.0 on Docker '1.13.1' ...
I0121 16:04:52.314922   64363 settings.go:123] acquiring lock: {Name:mk19004591210340446308469f521c5cfa3e1599 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0121 16:04:52.315482   64363 settings.go:131] Updating kubeconfig:  /root/.kube/config
I0121 16:04:52.316719   64363 lock.go:35] WriteFile acquiring /root/.kube/config: {Name:mk72a1487fd2da23da9e8181e16f352a6105bd56 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0121 16:04:52.317164   64363 kubeadm.go:662] kubelet [Unit]
Wants=docker.socket

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.17.0/kubelet --authorization-mode=Webhook --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroup-driver=cgroupfs --client-ca-file=/var/lib/minikube/certs/ca.crt --cluster-dns=10.96.0.10 --cluster-domain=cluster.local --config=/var/lib/kubelet/config.yaml --container-runtime=docker --fail-swap-on=false --hostname-override=minikube --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.190.128 --pod-manifest-path=/etc/kubernetes/manifests

[Install]
 config:
{KubernetesVersion:v1.17.0 NodeIP:192.168.190.128 NodePort:8443 NodeName:minikube APIServerName:localhost APIServerNames:[] APIServerIPs:[127.0.0.1] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false}
I0121 16:04:52.317208   64363 exec_runner.go:43] Run: /bin/bash -c "pgrep kubelet && sudo systemctl stop kubelet"
W0121 16:04:52.352441   64363 kubeadm.go:667] unable to stop kubelet: /bin/bash -c "pgrep kubelet && sudo systemctl stop kubelet": exit status 1
stdout:

stderr:
 command: "/bin/bash -c \"pgrep kubelet && sudo systemctl stop kubelet\"" output: ""
I0121 16:04:52.353080   64363 cache_binaries.go:74] Not caching binary, using https://storage.googleapis.com/kubernetes-release/release/v1.17.0/bin/linux/amd64/kubeadm
I0121 16:04:52.353110   64363 vm_assets.go:89] NewFileAsset: /root/.minikube/cache/v1.17.0/kubeadm -> /var/lib/minikube/binaries/v1.17.0/kubeadm
I0121 16:04:52.353127   64363 cache_binaries.go:74] Not caching binary, using https://storage.googleapis.com/kubernetes-release/release/v1.17.0/bin/linux/amd64/kubelet
I0121 16:04:52.353139   64363 vm_assets.go:89] NewFileAsset: /root/.minikube/cache/v1.17.0/kubelet -> /var/lib/minikube/binaries/v1.17.0/kubelet
I0121 16:04:52.415654   64363 exec_runner.go:43] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl start kubelet"
I0121 16:04:53.154603   64363 certs.go:65] Setting up /root/.minikube for IP: 192.168.190.128
I0121 16:04:53.154639   64363 certs.go:74] acquiring lock: {Name:mk93c80512a54d9940030ea6da822bcdf491352e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0121 16:04:53.155067   64363 crypto.go:69] Generating cert /root/.minikube/client.crt with IP's: []
I0121 16:04:53.157605   64363 crypto.go:157] Writing cert to /root/.minikube/client.crt ...
I0121 16:04:53.157646   64363 lock.go:35] WriteFile acquiring /root/.minikube/client.crt: {Name:mk061e272a59a4a72388b0d6272ab1df9bf2f30b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0121 16:04:53.157925   64363 crypto.go:165] Writing key to /root/.minikube/client.key ...
I0121 16:04:53.157967   64363 lock.go:35] WriteFile acquiring /root/.minikube/client.key: {Name:mkdddc8a89c079da0e37236baa07f299ad8f8c44 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0121 16:04:53.158183   64363 crypto.go:69] Generating cert /root/.minikube/apiserver.crt with IP's: [127.0.0.1 192.168.190.128 10.96.0.1 10.0.0.1]
I0121 16:04:53.161214   64363 crypto.go:157] Writing cert to /root/.minikube/apiserver.crt ...
I0121 16:04:53.161256   64363 lock.go:35] WriteFile acquiring /root/.minikube/apiserver.crt: {Name:mkbe836a1ba52c2fffe1b447538867318040a474 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0121 16:04:53.161519   64363 crypto.go:165] Writing key to /root/.minikube/apiserver.key ...
I0121 16:04:53.161533   64363 lock.go:35] WriteFile acquiring /root/.minikube/apiserver.key: {Name:mkb39cccbfc73367005382a94dc6d171ace63763 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0121 16:04:53.161758   64363 crypto.go:69] Generating cert /root/.minikube/proxy-client.crt with IP's: []
I0121 16:04:53.164156   64363 crypto.go:157] Writing cert to /root/.minikube/proxy-client.crt ...
I0121 16:04:53.164193   64363 lock.go:35] WriteFile acquiring /root/.minikube/proxy-client.crt: {Name:mk4f514d97aa2382a97a33ef62b25fe4dde73336 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0121 16:04:53.164434   64363 crypto.go:165] Writing key to /root/.minikube/proxy-client.key ...
I0121 16:04:53.164471   64363 lock.go:35] WriteFile acquiring /root/.minikube/proxy-client.key: {Name:mkadfb0cec838e418c64011144506609379d3602 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0121 16:04:53.164692   64363 vm_assets.go:89] NewFileAsset: /root/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
I0121 16:04:53.164736   64363 vm_assets.go:89] NewFileAsset: /root/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
I0121 16:04:53.164746   64363 vm_assets.go:89] NewFileAsset: /root/.minikube/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
I0121 16:04:53.164785   64363 vm_assets.go:89] NewFileAsset: /root/.minikube/apiserver.key -> /var/lib/minikube/certs/apiserver.key
I0121 16:04:53.164800   64363 vm_assets.go:89] NewFileAsset: /root/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
I0121 16:04:53.164809   64363 vm_assets.go:89] NewFileAsset: /root/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
I0121 16:04:53.164818   64363 vm_assets.go:89] NewFileAsset: /root/.minikube/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
I0121 16:04:53.164827   64363 vm_assets.go:89] NewFileAsset: /root/.minikube/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
I0121 16:04:53.164931   64363 vm_assets.go:89] NewFileAsset: /root/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
I0121 16:04:53.166900   64363 exec_runner.go:43] Run: openssl version
I0121 16:04:53.171199   64363 exec_runner.go:43] Run: sudo test -f /etc/ssl/certs/minikubeCA.pem
I0121 16:04:53.192112   64363 exec_runner.go:43] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0121 16:04:53.201434   64363 exec_runner.go:43] Run: sudo test -f /etc/ssl/certs/b5213941.0
πŸš€  Launching Kubernetes ... 
I0121 16:04:53.217716   64363 exec_runner.go:43] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0121 16:04:53.234965   64363 kubeadm.go:477] restartCluster start
I0121 16:04:53.235036   64363 exec_runner.go:43] Run: sudo test -d /data/minikube
I0121 16:04:53.251299   64363 kubeadm.go:220] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: exit status 1
stdout:

stderr:
I0121 16:04:53.251364   64363 exec_runner.go:43] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.17.0:$PATH kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
I0121 16:04:53.450055   64363 exec_runner.go:43] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.17.0:$PATH kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
I0121 16:04:53.917008   64363 exec_runner.go:43] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.17.0:$PATH kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
I0121 16:04:53.982231   64363 exec_runner.go:43] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.17.0:$PATH kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
I0121 16:04:54.037241   64363 kubeadm.go:396] waiting for apiserver healthz status ...
I0121 16:10:51.172125   64363 kubeadm.go:481] restartCluster took 4m0.804288875s
I0121 16:10:51.172985   64363 exec_runner.go:43] Run: docker ps -a --filter=name=k8s_kube-apiserver --format="{{.ID}}"
I0121 16:10:51.196834   64363 logs.go:178] 0 containers: []
W0121 16:10:51.196878   64363 logs.go:180] No container was found matching "kube-apiserver"
I0121 16:10:51.196922   64363 exec_runner.go:43] Run: docker ps -a --filter=name=k8s_coredns --format="{{.ID}}"
I0121 16:10:51.213250   64363 logs.go:178] 0 containers: []
W0121 16:10:51.213282   64363 logs.go:180] No container was found matching "coredns"
I0121 16:10:51.213325   64363 exec_runner.go:43] Run: docker ps -a --filter=name=k8s_kube-scheduler --format="{{.ID}}"
I0121 16:10:51.227978   64363 logs.go:178] 0 containers: []
W0121 16:10:51.228015   64363 logs.go:180] No container was found matching "kube-scheduler"
I0121 16:10:51.228056   64363 exec_runner.go:43] Run: docker ps -a --filter=name=k8s_kube-proxy --format="{{.ID}}"
I0121 16:10:51.242864   64363 logs.go:178] 0 containers: []
W0121 16:10:51.242905   64363 logs.go:180] No container was found matching "kube-proxy"
I0121 16:10:51.242944   64363 exec_runner.go:43] Run: docker ps -a --filter=name=k8s_kube-addon-manager --format="{{.ID}}"
I0121 16:10:51.257208   64363 logs.go:178] 0 containers: []
W0121 16:10:51.257252   64363 logs.go:180] No container was found matching "kube-addon-manager"
I0121 16:10:51.257284   64363 exec_runner.go:43] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format="{{.ID}}"
I0121 16:10:51.272797   64363 logs.go:178] 0 containers: []
W0121 16:10:51.272819   64363 logs.go:180] No container was found matching "kubernetes-dashboard"
I0121 16:10:51.272873   64363 exec_runner.go:43] Run: docker ps -a --filter=name=k8s_storage-provisioner --format="{{.ID}}"
I0121 16:10:51.288758   64363 logs.go:178] 0 containers: []
W0121 16:10:51.288799   64363 logs.go:180] No container was found matching "storage-provisioner"
I0121 16:10:51.288833   64363 exec_runner.go:43] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format="{{.ID}}"
I0121 16:10:51.304109   64363 logs.go:178] 0 containers: []
W0121 16:10:51.304150   64363 logs.go:180] No container was found matching "kube-controller-manager"
I0121 16:10:51.304167   64363 logs.go:92] Gathering logs for container status ...
I0121 16:10:51.304221   64363 exec_runner.go:43] Run: /bin/bash -c "sudo crictl ps -a || sudo docker ps -a"
I0121 16:11:01.381558   64363 exec_runner.go:72] Completed: /bin/bash -c "sudo crictl ps -a || sudo docker ps -a": (10.077317083s)
I0121 16:11:01.381689   64363 logs.go:92] Gathering logs for kubelet ...
I0121 16:11:01.381723   64363 exec_runner.go:43] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0121 16:11:01.434331   64363 logs.go:92] Gathering logs for dmesg ...
I0121 16:11:01.434380   64363 exec_runner.go:43] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0121 16:11:01.451932   64363 logs.go:92] Gathering logs for Docker ...
I0121 16:11:01.451975   64363 exec_runner.go:43] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
W0121 16:11:01.470173   64363 exit.go:101] Error starting cluster: apiserver healthz: apiserver healthz never reported healthy

πŸ’£  Error starting cluster: apiserver healthz: apiserver healthz never reported healthy

😿  minikube is exiting due to an error. If the above message is not useful, open an issue:
πŸ‘‰  https://github.com/kubernetes/minikube/issues/new/choose
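
One detail that stands out in this log is "Preparing Kubernetes v1.17.0 on Docker '1.13.1'": that is the docker package from CentOS 7's base repo and far older than the engines seen elsewhere in this thread. Upgrading to docker-ce before digging further seems worth a try; a sketch, assuming the official Docker CE repo is acceptable in your environment:

docker version --format '{{.Server.Version}}'   # confirm which engine is actually running
sudo yum install -y yum-utils
sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
sudo yum install -y docker-ce docker-ce-cli containerd.io
sudo systemctl enable --now docker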

@vgudavar

Deleting the cluster and recreating it worked for me (minikube delete).

@tstromberg
Contributor

I'm closing this issue as it hasn't seen activity in a while, and it's unclear whether it still exists. If this issue does continue to exist in the most recent release of minikube, please feel free to re-open it.

Thank you for opening the issue!

@juno-yu

juno-yu commented May 9, 2020

It seems it's still there:

πŸ˜„  minikube v1.9.2 on Darwin 10.15.4
✨  Using the hyperkit driver based on existing profile
πŸ‘  Starting control plane node m01 in cluster minikube
πŸ”„  Restarting existing hyperkit VM for "minikube" ...
🐳  Preparing Kubernetes v1.18.0 on Docker 19.03.8 ...

🀦  Unable to restart cluster, will reset it: apiserver healthz: apiserver process never appeared

πŸ’₯  initialization failed, will try again: run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.18.0
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.

	Unfortunately, an error has occurred:
		timed out waiting for the condition

	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'

	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.

	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'


stderr:
W0509 04:59:15.029527   12118 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	[WARNING Service-Docker]: docker service is not enabled, please run 'systemctl enable docker.service'
	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
W0509 04:59:17.157287   12118 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
W0509 04:59:17.160293   12118 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher

docker ps shows the docker service is running, though.
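
The kubeadm output above already names the next diagnostics; inside the minikube VM they would look roughly like this (via minikube ssh, echoing the hints printed in the error text):

minikube ssh
systemctl status kubelet
journalctl -xeu kubelet
docker ps -a | grep kube | grep -v pause   # find the failing control-plane container, then: docker logs CONTAINERID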
