
minikube service not working with Docker driver on Windows 10 Pro #7644

Closed
ps-feng opened this issue Apr 13, 2020 · 22 comments
Labels: co/docker-driver, co/service, kind/bug, os/windows, priority/important-soon

ps-feng commented Apr 13, 2020

Steps to reproduce the issue:
On Windows 10 Pro Version 1909 with Docker v19.03.8, following the Hello Minikube tutorial:

  1. minikube start (using Docker driver)
  2. kubectl create deployment hello-node --image=gcr.io/hello-minikube-zero-install/hello-node
  3. kubectl expose deployment hello-node --type=LoadBalancer --port=8080
  4. minikube service hello-node --alsologtostderr

The browser then opens but fails to connect.
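The service object itself looks fine as far as I can tell; querying it directly (a quick check, assuming kubectl is pointed at the minikube context) shows the same NodePort that minikube prints later:

$ kubectl get svc hello-node
# expected output along these lines (values taken from the logs below):
# NAME         TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
# hello-node   LoadBalancer   10.111.191.226   <pending>     8080:31376/TCP   ...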

Full output of minikube service hello-node --alsologtostderr:

$ minikube service hello-node --alsologtostderr
I0413 14:08:14.420177    4684 mustload.go:63] Loading cluster: minikube
I0413 14:08:14.422172    4684 oci.go:250] executing with [docker inspect -f {{.State.Status}} minikube] timeout: 19s
I0413 14:08:14.555177    4684 host.go:65] Checking if "minikube" exists ...
I0413 14:08:14.687175    4684 api_server.go:144] Checking apiserver status ...
I0413 14:08:14.730177    4684 kic_runner.go:91] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0413 14:08:14.951173    4684 kic_runner.go:91] Run: sudo egrep ^[0-9]+:freezer: /proc/1663/cgroup
I0413 14:08:15.131180    4684 api_server.go:160] apiserver freezer: "7:freezer:/docker/2a3d6d48b89667e71c2c9e11da40d8ce5d73001ff5847e793ad112413972ea0f/kubepods/burstable/pod4d841289a1476ffa02b335dc90866dfc/93c876fd4d655af9743a6254f3980f15b07ca925056c95f39245436f1ce8613b"
I0413 14:08:15.175183    4684 kic_runner.go:91] Run: sudo cat /sys/fs/cgroup/freezer/docker/2a3d6d48b89667e71c2c9e11da40d8ce5d73001ff5847e793ad112413972ea0f/kubepods/burstable/pod4d841289a1476ffa02b335dc90866dfc/93c876fd4d655af9743a6254f3980f15b07ca925056c95f39245436f1ce8613b/freezer.state
I0413 14:08:15.360175    4684 api_server.go:174] freezer state: "THAWED"
I0413 14:08:15.360175    4684 api_server.go:184] Checking apiserver healthz at https://127.0.0.1:32768/healthz ...
I0413 14:08:15.398174    4684 service.go:244] Found service: &Service{ObjectMeta:{hello-node  default /api/v1/namespaces/default/services/hello-node 55e7d931-6ab3-4728-8645-d1ee80dd2160 16657 0 2020-04-13 14:08:05 +0200 CEST <nil> <nil> map[app:hello-node] map[] [] []  [{kubectl.exe Update v1 2020-04-13 14:08:05 +0200 CEST FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 112 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 101 120 116 101 114 110 97 108 84 114 97 102 102 105 99 80 111 108 105 99 121 34 58 123 125 44 34 102 58 112 111 114 116 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 112 111 114 116 92 34 58 56 48 56 48 44 92 34 112 114 111 116 111 99 111 108 92 34 58 92 34 84 67 80 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 112 111 114 116 34 58 123 125 44 34 102 58 112 114 111 116 111 99 111 108 34 58 123 125 44 34 102 58 116 97 114 103 101 116 80 111 114 116 34 58 123 125 125 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 112 34 58 123 125 125 44 34 102 58 115 101 115 115 105 111 110 65 102 102 105 110 105 116 121 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125],}}]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:,Protocol:TCP,Port:8080,TargetPort:{0 8080 },NodePort:31376,},},Selector:map[string]string{app: hello-node,},ClusterIP:10.111.191.226,Type:LoadBalancer,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:Cluster,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamily:nil,TopologyKeys:[],},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},},}
I0413 14:08:15.417177    4684 service.go:244] Found service: &Service{ObjectMeta:{hello-node  default /api/v1/namespaces/default/services/hello-node 55e7d931-6ab3-4728-8645-d1ee80dd2160 16657 0 2020-04-13 14:08:05 +0200 CEST <nil> <nil> map[app:hello-node] map[] [] []  [{kubectl.exe Update v1 2020-04-13 14:08:05 +0200 CEST FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 112 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 101 120 116 101 114 110 97 108 84 114 97 102 102 105 99 80 111 108 105 99 121 34 58 123 125 44 34 102 58 112 111 114 116 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 112 111 114 116 92 34 58 56 48 56 48 44 92 34 112 114 111 116 111 99 111 108 92 34 58 92 34 84 67 80 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 112 111 114 116 34 58 123 125 44 34 102 58 112 114 111 116 111 99 111 108 34 58 123 125 44 34 102 58 116 97 114 103 101 116 80 111 114 116 34 58 123 125 125 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 112 34 58 123 125 125 44 34 102 58 115 101 115 115 105 111 110 65 102 102 105 110 105 116 121 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125],}}]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:,Protocol:TCP,Port:8080,TargetPort:{0 8080 },NodePort:31376,},},Selector:map[string]string{app: hello-node,},ClusterIP:10.111.191.226,Type:LoadBalancer,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:Cluster,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamily:nil,TopologyKeys:[],},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},},}
I0413 14:08:15.421170    4684 host.go:65] Checking if "minikube" exists ...
|-----------|------------|-------------|-------------------------|
| NAMESPACE |    NAME    | TARGET PORT |           URL           |
|-----------|------------|-------------|-------------------------|
| default   | hello-node |        8080 | http://172.17.0.5:31376 |
|-----------|------------|-------------|-------------------------|
* Opening service default/hello-node in default browser...
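Note that http://172.17.0.5:31376 points at the minikube node container's internal Docker IP (it matches the node's InternalIP in the logs below), which as far as I can tell is not reachable from the Windows host with Docker Desktop. As a stopgap, tunnelling through the apiserver does reach the pod (a sketch, assuming the default kubectl context):

$ kubectl port-forward service/hello-node 8080:8080
# then open http://localhost:8080 in the browser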

Full output of minikube start command used:

* minikube v1.9.2 on Microsoft Windows 10 Pro 10.0.18363 Build 18363
* Using the docker driver based on existing profile
* Starting control plane node m01 in cluster minikube
* Pulling base image ...
* Restarting existing docker container for "minikube" ...
* Preparing Kubernetes v1.18.0 on Docker 19.03.2...
  - kubeadm.pod-network-cidr=10.244.0.0/16
E0413 13:11:08.598650    4304 kubeadm.go:331] Overriding stale ClientConfig host https://127.0.0.1:32783 with https://127.0.0.1:32768
* Enabling addons: dashboard, default-storageclass, storage-provisioner
* Done! kubectl is now configured to use "minikube"
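As an aside, the apiserver port moved between restarts (32783 to 32768 in the line above). That seems expected with the Docker driver, since the host ports published for the container are re-assigned when it is recreated; the current mapping can be checked with:

$ docker port minikube
# lists the host ports Docker published for the minikube container (8443 -> 32768 on this run)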

Full output of minikube logs command:

* ==> Docker <==
* -- Logs begin at Mon 2020-04-13 11:10:56 UTC, end at Mon 2020-04-13 12:01:55 UTC. --
* Apr 13 11:11:02 minikube dockerd[352]: time="2020-04-13T11:11:02.755093679Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
* Apr 13 11:11:02 minikube dockerd[352]: time="2020-04-13T11:11:02.755106879Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
* Apr 13 11:11:02 minikube dockerd[352]: time="2020-04-13T11:11:02.755121479Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
* Apr 13 11:11:02 minikube dockerd[352]: time="2020-04-13T11:11:02.755134679Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
* Apr 13 11:11:02 minikube dockerd[352]: time="2020-04-13T11:11:02.755148179Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
* Apr 13 11:11:02 minikube dockerd[352]: time="2020-04-13T11:11:02.755161279Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
* Apr 13 11:11:02 minikube dockerd[352]: time="2020-04-13T11:11:02.755200979Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
* Apr 13 11:11:02 minikube dockerd[352]: time="2020-04-13T11:11:02.755217579Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
* Apr 13 11:11:02 minikube dockerd[352]: time="2020-04-13T11:11:02.755231579Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
* Apr 13 11:11:02 minikube dockerd[352]: time="2020-04-13T11:11:02.755244179Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
* Apr 13 11:11:02 minikube dockerd[352]: time="2020-04-13T11:11:02.755381880Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
* Apr 13 11:11:02 minikube dockerd[352]: time="2020-04-13T11:11:02.755422380Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
* Apr 13 11:11:02 minikube dockerd[352]: time="2020-04-13T11:11:02.755458080Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
* Apr 13 11:11:02 minikube dockerd[352]: time="2020-04-13T11:11:02.755469180Z" level=info msg="containerd successfully booted in 0.031621s"
* Apr 13 11:11:02 minikube dockerd[352]: time="2020-04-13T11:11:02.760496686Z" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0xc00005ab60, READY" module=grpc
* Apr 13 11:11:02 minikube dockerd[352]: time="2020-04-13T11:11:02.764349391Z" level=info msg="parsed scheme: \"unix\"" module=grpc
* Apr 13 11:11:02 minikube dockerd[352]: time="2020-04-13T11:11:02.764385591Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
* Apr 13 11:11:02 minikube dockerd[352]: time="2020-04-13T11:11:02.764405791Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock 0  <nil>}] }" module=grpc
* Apr 13 11:11:02 minikube dockerd[352]: time="2020-04-13T11:11:02.764418991Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
* Apr 13 11:11:02 minikube dockerd[352]: time="2020-04-13T11:11:02.764463191Z" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0xc000673390, CONNECTING" module=grpc
* Apr 13 11:11:02 minikube dockerd[352]: time="2020-04-13T11:11:02.764791291Z" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0xc000673390, READY" module=grpc
* Apr 13 11:11:02 minikube dockerd[352]: time="2020-04-13T11:11:02.765356492Z" level=info msg="parsed scheme: \"unix\"" module=grpc
* Apr 13 11:11:02 minikube dockerd[352]: time="2020-04-13T11:11:02.765375892Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
* Apr 13 11:11:02 minikube dockerd[352]: time="2020-04-13T11:11:02.765389892Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock 0  <nil>}] }" module=grpc
* Apr 13 11:11:02 minikube dockerd[352]: time="2020-04-13T11:11:02.765399792Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
* Apr 13 11:11:02 minikube dockerd[352]: time="2020-04-13T11:11:02.765436492Z" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0xc0006738d0, CONNECTING" module=grpc
* Apr 13 11:11:02 minikube dockerd[352]: time="2020-04-13T11:11:02.765710492Z" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0xc0006738d0, READY" module=grpc
* Apr 13 11:11:02 minikube dockerd[352]: time="2020-04-13T11:11:02.768229596Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
* Apr 13 11:11:02 minikube dockerd[352]: time="2020-04-13T11:11:02.780162410Z" level=info msg="Loading containers: start."
* Apr 13 11:11:03 minikube dockerd[352]: time="2020-04-13T11:11:03.216936352Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.18.0.0/16. Daemon option --bip can be used to set a preferred IP address"
* Apr 13 11:11:03 minikube dockerd[352]: time="2020-04-13T11:11:03.340525705Z" level=info msg="Loading containers: done."
* Apr 13 11:11:03 minikube dockerd[352]: time="2020-04-13T11:11:03.360670330Z" level=info msg="Docker daemon" commit=6a30dfca03 graphdriver(s)=overlay2 version=19.03.2
* Apr 13 11:11:03 minikube dockerd[352]: time="2020-04-13T11:11:03.360775530Z" level=info msg="Daemon has completed initialization"
* Apr 13 11:11:03 minikube systemd[1]: Started Docker Application Container Engine.
* Apr 13 11:11:03 minikube dockerd[352]: time="2020-04-13T11:11:03.384423259Z" level=info msg="API listen on /var/run/docker.sock"
* Apr 13 11:11:03 minikube dockerd[352]: time="2020-04-13T11:11:03.384594660Z" level=info msg="API listen on [::]:2376"
* Apr 13 11:11:08 minikube dockerd[352]: time="2020-04-13T11:11:08.776555824Z" level=info msg="shim containerd-shim started" address=/containerd-shim/64405b3636e9836336c89603ca903b9204051a2d79f1c50595de47081245c407.sock debug=false pid=1334
* Apr 13 11:11:08 minikube dockerd[352]: time="2020-04-13T11:11:08.787113424Z" level=info msg="shim containerd-shim started" address=/containerd-shim/e7c2cced9d885ad53dcd8738feac9490558947d2f842a6f47947de6584862ef2.sock debug=false pid=1347
* Apr 13 11:11:08 minikube dockerd[352]: time="2020-04-13T11:11:08.792797924Z" level=info msg="shim containerd-shim started" address=/containerd-shim/63436009e26bc56648b745fd184947d067af078981d9b63cd66e97c127823557.sock debug=false pid=1351
* Apr 13 11:11:08 minikube dockerd[352]: time="2020-04-13T11:11:08.860290223Z" level=info msg="shim containerd-shim started" address=/containerd-shim/1bef77e224049484eccd4a80c86a7f5a96177d96b8669b4a7593e8440bd1f1d0.sock debug=false pid=1423
* Apr 13 11:11:09 minikube dockerd[352]: time="2020-04-13T11:11:09.111503520Z" level=info msg="shim containerd-shim started" address=/containerd-shim/4246f0d09d678ef43ebe8c4964f027bb3826cbbb0f56dddd63ce38f1e0db6c51.sock debug=false pid=1571
* Apr 13 11:11:09 minikube dockerd[352]: time="2020-04-13T11:11:09.117481619Z" level=info msg="shim containerd-shim started" address=/containerd-shim/3fdacbd7ce689c2702b0e3b7c86023bbf2fd172d24c83edb38799d9c6db51c5e.sock debug=false pid=1577
* Apr 13 11:11:09 minikube dockerd[352]: time="2020-04-13T11:11:09.131962519Z" level=info msg="shim containerd-shim started" address=/containerd-shim/bd7bb59fddd9c6ba0be95a01a64c162203623bcd9032b082bde31b2f9100525a.sock debug=false pid=1599
* Apr 13 11:11:09 minikube dockerd[352]: time="2020-04-13T11:11:09.141812919Z" level=info msg="shim containerd-shim started" address=/containerd-shim/df53501041ff4e4ecfe8d9549da006969ad21107549c931e71c856c86526b1c4.sock debug=false pid=1603
* Apr 13 11:11:16 minikube dockerd[352]: time="2020-04-13T11:11:16.429769950Z" level=info msg="shim containerd-shim started" address=/containerd-shim/11d8df201bd398bf31951590320a3a7658fb5f608dfb98c769a7d9396966c7ea.sock debug=false pid=2168
* Apr 13 11:11:16 minikube dockerd[352]: time="2020-04-13T11:11:16.430665850Z" level=info msg="shim containerd-shim started" address=/containerd-shim/3cd8354b0f89d89c4e636bd0e336d13048182bbc0dd11fd26238fbc38206f078.sock debug=false pid=2169
* Apr 13 11:11:16 minikube dockerd[352]: time="2020-04-13T11:11:16.431030351Z" level=info msg="shim containerd-shim started" address=/containerd-shim/82694d427d67e7b7ec9ca65034f685ab96ab59afc70f5a11da1a35fd08d8191f.sock debug=false pid=2167
* Apr 13 11:11:16 minikube dockerd[352]: time="2020-04-13T11:11:16.433934752Z" level=info msg="shim containerd-shim started" address=/containerd-shim/4c3d8a58c90ac620bc121919631a4431e279be5a23281a57d75c5ee15f065f31.sock debug=false pid=2187
* Apr 13 11:11:16 minikube dockerd[352]: time="2020-04-13T11:11:16.550115413Z" level=info msg="shim containerd-shim started" address=/containerd-shim/7c399d28f3a6691be538a88423935ab1496be0b603d86974a85bf30773540a1e.sock debug=false pid=2282
* Apr 13 11:11:16 minikube dockerd[352]: time="2020-04-13T11:11:16.687817986Z" level=info msg="shim containerd-shim started" address=/containerd-shim/4d1da4c2e2945f6e44e039d664faf47d080a5edd71e821bba444db0ff65328e3.sock debug=false pid=2330
* Apr 13 11:11:16 minikube dockerd[352]: time="2020-04-13T11:11:16.728234207Z" level=info msg="shim containerd-shim started" address=/containerd-shim/4ca3150f9994dab8c484d639597dac75a6dbe8c4bfba448347282353e8b28778.sock debug=false pid=2345
* Apr 13 11:11:16 minikube dockerd[352]: time="2020-04-13T11:11:16.738406212Z" level=info msg="shim containerd-shim started" address=/containerd-shim/a79f06d7c39ad5d0e4b55c20e936649eea0b7776547817ebe7c8a10d8cb7196d.sock debug=false pid=2347
* Apr 13 11:11:16 minikube dockerd[352]: time="2020-04-13T11:11:16.750496718Z" level=info msg="shim containerd-shim started" address=/containerd-shim/a3096e22fe9f1231ac45e8582acae3e68db54cb0363f093c3e48a447519b0634.sock debug=false pid=2363
* Apr 13 11:11:16 minikube dockerd[352]: time="2020-04-13T11:11:16.818732254Z" level=info msg="shim containerd-shim started" address=/containerd-shim/eaecd754134ecef06b284f5aab2f5f5f9cce64b1943b6a038e21ee647582b869.sock debug=false pid=2412
* Apr 13 11:11:16 minikube dockerd[352]: time="2020-04-13T11:11:16.849965471Z" level=info msg="shim containerd-shim started" address=/containerd-shim/287e25e539ffdc58349f1fcefcdb7a7a45a53493af370dd5de9c29e853ccdec2.sock debug=false pid=2434
* Apr 13 11:11:17 minikube dockerd[352]: time="2020-04-13T11:11:17.174190141Z" level=info msg="shim containerd-shim started" address=/containerd-shim/cf266d309a89763d31c8c297fca8306c3807b377bf1f3d83ea9196026da774ad.sock debug=false pid=2538
* Apr 13 11:11:17 minikube dockerd[352]: time="2020-04-13T11:11:17.421767171Z" level=info msg="shim containerd-shim started" address=/containerd-shim/e984a0a2fb0571ce322233c2aec6efb46c9130aa098be310e8f4bbaf27e84dc7.sock debug=false pid=2626
* Apr 13 11:11:18 minikube dockerd[352]: time="2020-04-13T11:11:18.024428984Z" level=info msg="shim containerd-shim started" address=/containerd-shim/abb4dc4dc211dc8479c8e21459917ccd8b29ca13dcc63173fd393d55b9020426.sock debug=false pid=2729
* Apr 13 11:11:18 minikube dockerd[352]: time="2020-04-13T11:11:18.396988810Z" level=info msg="shim containerd-shim started" address=/containerd-shim/7605fac4735cf695c570e37db2842b347eeb8b2b6db74d85b63be0bad84dc3e5.sock debug=false pid=2814
* Apr 13 11:11:18 minikube dockerd[352]: time="2020-04-13T11:11:18.967945703Z" level=info msg="shim containerd-shim started" address=/containerd-shim/bc083b7e84dc49e80da68b6424c96254a2345d998e68d4e2ae8210c5b85d16ce.sock debug=false pid=2854
* 
* ==> container status <==
* CONTAINER           IMAGE                                                                                                                   CREATED             STATE               NAME                        ATTEMPT             POD ID
* 899630c968ed4       gcr.io/hello-minikube-zero-install/hello-node@sha256:9cf82733f7278ae7ae899d432f8d3b3bb0fcb54e673c67496a9f76bb58f30a1c   50 minutes ago      Running             hello-node                  2                   ce0ad4a571ad5
* 47b5bd3229444       3b08661dc379d                                                                                                           50 minutes ago      Running             dashboard-metrics-scraper   2                   bf4599675ea68
* 8d06583915d35       cdc71b5a8a0ee                                                                                                           50 minutes ago      Running             kubernetes-dashboard        2                   db4fb651f3231
* 3b3a25a320f64       67da37a9a360e                                                                                                           50 minutes ago      Running             coredns                     2                   78bde8e07f0e5
* 82f18b99ba1a0       67da37a9a360e                                                                                                           50 minutes ago      Running             coredns                     2                   946cad92987af
* 257686d800694       43940c34f24f3                                                                                                           50 minutes ago      Running             kube-proxy                  2                   76ab05f6d59a3
* fa227d6f5c3f7       4689081edb103                                                                                                           50 minutes ago      Running             storage-provisioner         4                   c1390fb3b62e6
* b039f7ce72516       aa67fec7d7ef7                                                                                                           50 minutes ago      Running             kindnet-cni                 2                   61144308a6bba
* 7b766ce32ae4a       d3e55153f52fb                                                                                                           50 minutes ago      Running             kube-controller-manager     2                   71099ebc2c9af
* 93c876fd4d655       74060cea7f704                                                                                                           50 minutes ago      Running             kube-apiserver              2                   22690a50acc1c
* d4b4fb3793863       303ce5db0e90d                                                                                                           50 minutes ago      Running             etcd                        2                   e7af5afc35145
* 5aa36e61bfeda       a31f78c7c8ce1                                                                                                           50 minutes ago      Running             kube-scheduler              2                   782b9f33e07e3
* d9aeadf8a2e97       4689081edb103                                                                                                           11 hours ago        Exited              storage-provisioner         3                   043685dead0ca
* 295df3b009a45       gcr.io/hello-minikube-zero-install/hello-node@sha256:9cf82733f7278ae7ae899d432f8d3b3bb0fcb54e673c67496a9f76bb58f30a1c   11 hours ago        Exited              hello-node                  1                   fb7f2ae271961
* df4b2bedbc2f2       3b08661dc379d                                                                                                           11 hours ago        Exited              dashboard-metrics-scraper   1                   9a1531bd3be93
* 89618e54852f0       cdc71b5a8a0ee                                                                                                           11 hours ago        Exited              kubernetes-dashboard        1                   30ea6dfecd42b
* 63fcd44ee0f61       67da37a9a360e                                                                                                           11 hours ago        Exited              coredns                     1                   817c9f9c7434e
* 40ab89b237516       67da37a9a360e                                                                                                           11 hours ago        Exited              coredns                     1                   3195dfc40173f
* 981d47c4ff600       43940c34f24f3                                                                                                           11 hours ago        Exited              kube-proxy                  1                   c04e0bfa8b949
* 5519f91d83a99       aa67fec7d7ef7                                                                                                           11 hours ago        Exited              kindnet-cni                 1                   3f0d85b5f188a
* e4783ba96f5a0       74060cea7f704                                                                                                           11 hours ago        Exited              kube-apiserver              1                   0df4965939b24
* 01ef6f36dd1ce       303ce5db0e90d                                                                                                           11 hours ago        Exited              etcd                        1                   c15c7b429cc41
* e06dbd127d277       a31f78c7c8ce1                                                                                                           11 hours ago        Exited              kube-scheduler              1                   67f3af9ec63e7
* cabc6d45c8de2       d3e55153f52fb                                                                                                           11 hours ago        Exited              kube-controller-manager     1                   ebe2a3cae35a4
* 
* ==> coredns [3b3a25a320f6] <==
* .:53
* [INFO] plugin/reload: Running configuration MD5 = 4e235fcc3696966e76816bcd9034ebc7
* CoreDNS-1.6.7
* linux/amd64, go1.13.6, da7f65b
* [INFO] plugin/ready: Still waiting on: "kubernetes"
* [INFO] plugin/ready: Still waiting on: "kubernetes"
* I0413 11:11:38.750419       1 trace.go:116] Trace[2019727887]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105 (started: 2020-04-13 11:11:17.747511938 +0000 UTC m=+0.039288600) (total time: 21.002835059s):
* Trace[2019727887]: [21.002835059s] [21.002835059s] END
* E0413 11:11:38.750459       1 reflector.go:153] pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: connect: connection refused
* I0413 11:11:38.750477       1 trace.go:116] Trace[1427131847]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105 (started: 2020-04-13 11:11:17.747755438 +0000 UTC m=+0.039532200) (total time: 21.002609059s):
* Trace[1427131847]: [21.002609059s] [21.002609059s] END
* E0413 11:11:38.750482       1 reflector.go:153] pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: connect: connection refused
* I0413 11:11:38.751141       1 trace.go:116] Trace[939984059]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105 (started: 2020-04-13 11:11:17.746426638 +0000 UTC m=+0.038203300) (total time: 21.004694859s):
* Trace[939984059]: [21.004694859s] [21.004694859s] END
* E0413 11:11:38.751173       1 reflector.go:153] pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: connect: connection refused
* 
* ==> coredns [40ab89b23751] <==
* .:53
* [INFO] plugin/reload: Running configuration MD5 = 4e235fcc3696966e76816bcd9034ebc7
* CoreDNS-1.6.7
* linux/amd64, go1.13.6, da7f65b
* [INFO] SIGTERM: Shutting down servers then terminating
* [INFO] plugin/health: Going into lameduck mode for 5s
* 
* ==> coredns [63fcd44ee0f6] <==
* .:53
* [INFO] plugin/reload: Running configuration MD5 = 4e235fcc3696966e76816bcd9034ebc7
* CoreDNS-1.6.7
* linux/amd64, go1.13.6, da7f65b
* [INFO] SIGTERM: Shutting down servers then terminating
* [INFO] plugin/health: Going into lameduck mode for 5s
* 
* ==> coredns [82f18b99ba1a] <==
* I0413 11:11:38.718549       1 trace.go:116] Trace[2019727887]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105 (started: 2020-04-13 11:11:17.708604239 +0000 UTC m=+0.203287225) (total time: 21.009848255s):
* Trace[2019727887]: [21.009848255s] [21.009848255s] END
* E0413 11:11:38.718570       1 reflector.go:153] pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: connect: connection refused
* I0413 11:11:38.718584       1 trace.go:116] Trace[939984059]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105 (started: 2020-04-13 11:11:17.714168339 +0000 UTC m=+0.208851425) (total time: 21.004399955s):
* Trace[939984059]: [21.004399955s] [21.004399955s] END
* E0413 11:11:38.718595       1 reflector.go:153] pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: connect: connection refused
* I0413 11:11:38.718589       1 trace.go:116] Trace[1427131847]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105 (started: 2020-04-13 11:11:17.708528639 +0000 UTC m=+0.203211725) (total time: 21.009956455s):
* Trace[1427131847]: [21.009956455s] [21.009956455s] END
* E0413 11:11:38.718607       1 reflector.go:153] pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: connect: connection refused
* 
* ==> describe nodes <==
* Name:               minikube
* Roles:              master
* Labels:             beta.kubernetes.io/arch=amd64
*                     beta.kubernetes.io/os=linux
*                     kubernetes.io/arch=amd64
*                     kubernetes.io/hostname=minikube
*                     kubernetes.io/os=linux
*                     minikube.k8s.io/commit=93af9c1e43cab9618e301bc9fa720c63d5efa393
*                     minikube.k8s.io/name=minikube
*                     minikube.k8s.io/updated_at=2020_04_13T01_43_39_0700
*                     minikube.k8s.io/version=v1.9.2
*                     node-role.kubernetes.io/master=
* Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
*                     node.alpha.kubernetes.io/ttl: 0
*                     volumes.kubernetes.io/controller-managed-attach-detach: true
* CreationTimestamp:  Sun, 12 Apr 2020 23:43:36 +0000
* Taints:             <none>
* Unschedulable:      false
* Lease:
*   HolderIdentity:  minikube
*   AcquireTime:     <unset>
*   RenewTime:       Mon, 13 Apr 2020 12:01:47 +0000
* Conditions:
*   Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
*   ----             ------  -----------------                 ------------------                ------                       -------
*   MemoryPressure   False   Mon, 13 Apr 2020 12:01:24 +0000   Sun, 12 Apr 2020 23:43:30 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
*   DiskPressure     False   Mon, 13 Apr 2020 12:01:24 +0000   Sun, 12 Apr 2020 23:43:30 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
*   PIDPressure      False   Mon, 13 Apr 2020 12:01:24 +0000   Sun, 12 Apr 2020 23:43:30 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
*   Ready            True    Mon, 13 Apr 2020 12:01:24 +0000   Sun, 12 Apr 2020 23:43:49 +0000   KubeletReady                 kubelet is posting ready status
* Addresses:
*   InternalIP:  172.17.0.5
*   Hostname:    minikube
* Capacity:
*   cpu:                2
*   ephemeral-storage:  61664044Ki
*   hugepages-2Mi:      0
*   memory:             4033036Ki
*   pods:               110
* Allocatable:
*   cpu:                2
*   ephemeral-storage:  61664044Ki
*   hugepages-2Mi:      0
*   memory:             4033036Ki
*   pods:               110
* System Info:
*   Machine ID:                 58d9f7fb995a403782db19b36d96f2dc
*   System UUID:                44186434-f374-4235-946d-dcef3385adf8
*   Boot ID:                    d86f6092-6f15-447b-ac27-ec61122dcb0d
*   Kernel Version:             4.19.76-linuxkit
*   OS Image:                   Ubuntu 19.10
*   Operating System:           linux
*   Architecture:               amd64
*   Container Runtime Version:  docker://19.3.2
*   Kubelet Version:            v1.18.0
*   Kube-Proxy Version:         v1.18.0
* PodCIDR:                      10.244.0.0/24
* PodCIDRs:                     10.244.0.0/24
* Non-terminated Pods:          (12 in total)
*   Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
*   ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
*   default                     hello-node-677b9cfc6b-w8vq4                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         11h
*   kube-system                 coredns-66bff467f8-89fgq                      100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     12h
*   kube-system                 coredns-66bff467f8-ccp2b                      100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     12h
*   kube-system                 etcd-minikube                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         12h
*   kube-system                 kindnet-pqj4q                                 100m (5%)     100m (5%)   50Mi (1%)        50Mi (1%)      12h
*   kube-system                 kube-apiserver-minikube                       250m (12%)    0 (0%)      0 (0%)           0 (0%)         12h
*   kube-system                 kube-controller-manager-minikube              200m (10%)    0 (0%)      0 (0%)           0 (0%)         12h
*   kube-system                 kube-proxy-d2m88                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         12h
*   kube-system                 kube-scheduler-minikube                       100m (5%)     0 (0%)      0 (0%)           0 (0%)         12h
*   kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         12h
*   kubernetes-dashboard        dashboard-metrics-scraper-84bfdf55ff-hxlsv    0 (0%)        0 (0%)      0 (0%)           0 (0%)         12h
*   kubernetes-dashboard        kubernetes-dashboard-bc446cc64-mvm27          0 (0%)        0 (0%)      0 (0%)           0 (0%)         12h
* Allocated resources:
*   (Total limits may be over 100 percent, i.e., overcommitted.)
*   Resource           Requests    Limits
*   --------           --------    ------
*   cpu                850m (42%)  100m (5%)
*   memory             190Mi (4%)  390Mi (9%)
*   ephemeral-storage  0 (0%)      0 (0%)
*   hugepages-2Mi      0 (0%)      0 (0%)
* Events:
*   Type     Reason                   Age                From                  Message
*   ----     ------                   ----               ----                  -------
*   Normal   NodeHasSufficientMemory  12h (x4 over 12h)  kubelet, minikube     Node minikube status is now: NodeHasSufficientMemory
*   Normal   NodeHasNoDiskPressure    12h (x4 over 12h)  kubelet, minikube     Node minikube status is now: NodeHasNoDiskPressure
*   Normal   NodeHasSufficientPID     12h (x4 over 12h)  kubelet, minikube     Node minikube status is now: NodeHasSufficientPID
*   Normal   NodeAllocatableEnforced  12h                kubelet, minikube     Updated Node Allocatable limit across pods
*   Normal   Starting                 12h                kubelet, minikube     Starting kubelet.
*   Normal   NodeHasSufficientMemory  12h                kubelet, minikube     Node minikube status is now: NodeHasSufficientMemory
*   Normal   NodeHasNoDiskPressure    12h                kubelet, minikube     Node minikube status is now: NodeHasNoDiskPressure
*   Normal   NodeHasSufficientPID     12h                kubelet, minikube     Node minikube status is now: NodeHasSufficientPID
*   Normal   NodeAllocatableEnforced  12h                kubelet, minikube     Updated Node Allocatable limit across pods
*   Normal   NodeReady                12h                kubelet, minikube     Node minikube status is now: NodeReady
*   Warning  readOnlySysFS            12h                kube-proxy, minikube  CRI error: /sys is read-only: cannot modify conntrack limits, problems may arise later (If running Docker, see docker issue #24000)
*   Normal   Starting                 12h                kube-proxy, minikube  Starting kube-proxy.
*   Normal   NodeAllocatableEnforced  11h                kubelet, minikube     Updated Node Allocatable limit across pods
*   Normal   Starting                 11h                kubelet, minikube     Starting kubelet.
*   Normal   NodeHasNoDiskPressure    11h (x8 over 11h)  kubelet, minikube     Node minikube status is now: NodeHasNoDiskPressure
*   Normal   NodeHasSufficientPID     11h (x7 over 11h)  kubelet, minikube     Node minikube status is now: NodeHasSufficientPID
*   Normal   NodeHasSufficientMemory  11h (x8 over 11h)  kubelet, minikube     Node minikube status is now: NodeHasSufficientMemory
*   Warning  readOnlySysFS            11h                kube-proxy, minikube  CRI error: /sys is read-only: cannot modify conntrack limits, problems may arise later (If running Docker, see docker issue #24000)
*   Normal   Starting                 11h                kube-proxy, minikube  Starting kube-proxy.
*   Normal   Starting                 50m                kubelet, minikube     Starting kubelet.
*   Normal   NodeHasSufficientMemory  50m (x8 over 50m)  kubelet, minikube     Node minikube status is now: NodeHasSufficientMemory
*   Normal   NodeHasNoDiskPressure    50m (x8 over 50m)  kubelet, minikube     Node minikube status is now: NodeHasNoDiskPressure
*   Normal   NodeHasSufficientPID     50m (x7 over 50m)  kubelet, minikube     Node minikube status is now: NodeHasSufficientPID
*   Normal   NodeAllocatableEnforced  50m                kubelet, minikube     Updated Node Allocatable limit across pods
*   Warning  readOnlySysFS            50m                kube-proxy, minikube  CRI error: /sys is read-only: cannot modify conntrack limits, problems may arise later (If running Docker, see docker issue #24000)
*   Normal   Starting                 50m                kube-proxy, minikube  Starting kube-proxy.
* 
* ==> dmesg <==
* [Apr13 11:01] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
* [  +0.017040] PCI: Fatal: No config space access function found
* [  +0.067935] PCI: System does not support PCI
* [  +0.225518] Hangcheck: starting hangcheck timer 0.9.1 (tick is 180 seconds, margin is 60 seconds).
* [  +0.250852] Unstable clock detected, switching default tracing clock to "global"
*               If you want to keep using the local clock, then add:
*                 "trace_clock=local"
*               on the kernel command line
* [  +0.015029] FAT-fs (sr0): utf8 is not a recommended IO charset for FAT filesystems, filesystem will be case sensitive!
* [  +0.006202] FAT-fs (sr0): utf8 is not a recommended IO charset for FAT filesystems, filesystem will be case sensitive!
* [  +3.174144] FAT-fs (sr2): utf8 is not a recommended IO charset for FAT filesystems, filesystem will be case sensitive!
* [  +0.007913] FAT-fs (sr2): utf8 is not a recommended IO charset for FAT filesystems, filesystem will be case sensitive!
* 
* ==> etcd [01ef6f36dd1c] <==
* [WARNING] Deprecated '--logger=capnslog' flag is set; use '--logger=zap' flag instead
* 2020-04-13 00:42:57.106135 I | etcdmain: etcd Version: 3.4.3
* 2020-04-13 00:42:57.106180 I | etcdmain: Git SHA: 3cf2f69b5
* 2020-04-13 00:42:57.106184 I | etcdmain: Go Version: go1.12.12
* 2020-04-13 00:42:57.106187 I | etcdmain: Go OS/Arch: linux/amd64
* 2020-04-13 00:42:57.109154 I | etcdmain: setting maximum number of CPUs to 2, total number of available CPUs is 2
* 2020-04-13 00:42:57.111189 N | etcdmain: the server is already initialized as member before, starting as etcd member...
* [WARNING] Deprecated '--logger=capnslog' flag is set; use '--logger=zap' flag instead
* 2020-04-13 00:42:57.121843 I | embed: peerTLS: cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
* 2020-04-13 00:42:57.205156 I | embed: name = minikube
* 2020-04-13 00:42:57.205184 I | embed: data dir = /var/lib/minikube/etcd
* 2020-04-13 00:42:57.205189 I | embed: member dir = /var/lib/minikube/etcd/member
* 2020-04-13 00:42:57.211152 I | embed: heartbeat = 100ms
* 2020-04-13 00:42:57.211158 I | embed: election = 1000ms
* 2020-04-13 00:42:57.216135 I | embed: snapshot count = 10000
* 2020-04-13 00:42:57.216155 I | embed: advertise client URLs = https://172.17.0.5:2379
* 2020-04-13 00:42:57.216216 I | embed: initial advertise peer URLs = https://172.17.0.5:2380
* 2020-04-13 00:42:57.219602 I | embed: initial cluster = 
* 2020-04-13 00:42:57.535401 I | etcdserver: restarting member 952f31ff200093ba in cluster 5af0857ece1ce0e5 at commit index 9270
* raft2020/04/13 00:42:57 INFO: 952f31ff200093ba switched to configuration voters=()
* raft2020/04/13 00:42:57 INFO: 952f31ff200093ba became follower at term 2
* raft2020/04/13 00:42:57 INFO: newRaft 952f31ff200093ba [peers: [], term: 2, commit: 9270, applied: 0, lastindex: 9270, lastterm: 2]
* 2020-04-13 00:42:57.540059 I | mvcc: restore compact to 7162
* 2020-04-13 00:42:57.542282 W | auth: simple token is not cryptographically signed
* 2020-04-13 00:42:57.544693 I | etcdserver: starting server... [version: 3.4.3, cluster version: to_be_decided]
* 2020-04-13 00:42:57.547049 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
* 2020-04-13 00:42:57.547250 I | embed: listening for metrics on http://127.0.0.1:2381
* 2020-04-13 00:42:57.547474 I | embed: listening for peers on 172.17.0.5:2380
* raft2020/04/13 00:42:57 INFO: 952f31ff200093ba switched to configuration voters=(10749865807379993530)
* 2020-04-13 00:42:57.547818 I | etcdserver/membership: added member 952f31ff200093ba [https://172.17.0.5:2380] to cluster 5af0857ece1ce0e5
* 2020-04-13 00:42:57.547955 N | etcdserver/membership: set the initial cluster version to 3.4
* 2020-04-13 00:42:57.548102 I | etcdserver/api: enabled capabilities for version 3.4
* raft2020/04/13 00:42:59 INFO: 952f31ff200093ba is starting a new election at term 2
* raft2020/04/13 00:42:59 INFO: 952f31ff200093ba became candidate at term 3
* raft2020/04/13 00:42:59 INFO: 952f31ff200093ba received MsgVoteResp from 952f31ff200093ba at term 3
* raft2020/04/13 00:42:59 INFO: 952f31ff200093ba became leader at term 3
* raft2020/04/13 00:42:59 INFO: raft.node: 952f31ff200093ba elected leader 952f31ff200093ba at term 3
* 2020-04-13 00:42:59.142459 I | embed: ready to serve client requests
* 2020-04-13 00:42:59.142566 I | etcdserver: published {Name:minikube ClientURLs:[https://172.17.0.5:2379]} to cluster 5af0857ece1ce0e5
* 2020-04-13 00:42:59.143584 I | embed: serving client requests on 172.17.0.5:2379
* 2020-04-13 00:42:59.146980 I | embed: ready to serve client requests
* 2020-04-13 00:42:59.147993 I | embed: serving client requests on 127.0.0.1:2379
* 2020-04-13 00:43:02.511502 W | etcdserver: read-only range request "key:\"/registry/ranges/serviceips\" " with result "range_response_count:1 size:101316" took too long (117.372605ms) to execute
* 2020-04-13 00:43:02.512142 W | etcdserver: read-only range request "key:\"/registry/csinodes/minikube\" " with result "range_response_count:1 size:452" took too long (118.094905ms) to execute
* 2020-04-13 00:43:02.513360 W | etcdserver: read-only range request "key:\"/registry/minions/minikube\" " with result "range_response_count:1 size:5489" took too long (115.957805ms) to execute
* 2020-04-13 00:47:06.218064 I | etcdserver: start to snapshot (applied: 10001, lastsnap: 0)
* 2020-04-13 00:47:06.223478 I | etcdserver: saved snapshot at index 10001
* 2020-04-13 00:47:06.224240 I | etcdserver: compacted raft log at 5001
* 2020-04-13 00:48:51.933193 N | pkg/osutil: received terminated signal, shutting down...
* WARNING: 2020/04/13 00:48:51 grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
* 2020-04-13 00:48:51.969540 I | etcdserver: skipped leadership transfer for single voting member cluster
* 
* ==> etcd [d4b4fb379386] <==
* 2020-04-13 11:11:09.440335 I | etcdmain: Git SHA: 3cf2f69b5
* 2020-04-13 11:11:09.440341 I | etcdmain: Go Version: go1.12.12
* 2020-04-13 11:11:09.440347 I | etcdmain: Go OS/Arch: linux/amd64
* 2020-04-13 11:11:09.440353 I | etcdmain: setting maximum number of CPUs to 2, total number of available CPUs is 2
* 2020-04-13 11:11:09.440417 N | etcdmain: the server is already initialized as member before, starting as etcd member...
* [WARNING] Deprecated '--logger=capnslog' flag is set; use '--logger=zap' flag instead
* 2020-04-13 11:11:09.440458 I | embed: peerTLS: cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
* 2020-04-13 11:11:09.458273 I | embed: name = minikube
* 2020-04-13 11:11:09.458292 I | embed: data dir = /var/lib/minikube/etcd
* 2020-04-13 11:11:09.458300 I | embed: member dir = /var/lib/minikube/etcd/member
* 2020-04-13 11:11:09.458306 I | embed: heartbeat = 100ms
* 2020-04-13 11:11:09.458311 I | embed: election = 1000ms
* 2020-04-13 11:11:09.458317 I | embed: snapshot count = 10000
* 2020-04-13 11:11:09.458331 I | embed: advertise client URLs = https://172.17.0.5:2379
* 2020-04-13 11:11:09.458338 I | embed: initial advertise peer URLs = https://172.17.0.5:2380
* 2020-04-13 11:11:09.458347 I | embed: initial cluster = 
* 2020-04-13 11:11:09.488998 I | etcdserver: recovered store from snapshot at index 10001
* 2020-04-13 11:11:09.500437 I | mvcc: restore compact to 7162
* 2020-04-13 11:11:09.660441 I | etcdserver: restarting member 952f31ff200093ba in cluster 5af0857ece1ce0e5 at commit index 10265
* raft2020/04/13 11:11:09 INFO: 952f31ff200093ba switched to configuration voters=(10749865807379993530)
* raft2020/04/13 11:11:09 INFO: 952f31ff200093ba became follower at term 3
* raft2020/04/13 11:11:09 INFO: newRaft 952f31ff200093ba [peers: [952f31ff200093ba], term: 3, commit: 10265, applied: 10001, lastindex: 10265, lastterm: 3]
* 2020-04-13 11:11:09.663699 I | etcdserver/api: enabled capabilities for version 3.4
* 2020-04-13 11:11:09.663791 I | etcdserver/membership: added member 952f31ff200093ba [https://172.17.0.5:2380] to cluster 5af0857ece1ce0e5 from store
* 2020-04-13 11:11:09.663899 I | etcdserver/membership: set the cluster version to 3.4 from store
* 2020-04-13 11:11:09.669824 I | mvcc: restore compact to 7162
* 2020-04-13 11:11:09.688533 W | auth: simple token is not cryptographically signed
* 2020-04-13 11:11:09.695322 I | etcdserver: starting server... [version: 3.4.3, cluster version: 3.4]
* 2020-04-13 11:11:09.700820 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
* 2020-04-13 11:11:09.701084 I | embed: listening for metrics on http://127.0.0.1:2381
* 2020-04-13 11:11:09.701542 I | etcdserver: 952f31ff200093ba as single-node; fast-forwarding 9 ticks (election ticks 10)
* 2020-04-13 11:11:09.702665 I | embed: listening for peers on 172.17.0.5:2380
* raft2020/04/13 11:11:10 INFO: 952f31ff200093ba is starting a new election at term 3
* raft2020/04/13 11:11:10 INFO: 952f31ff200093ba became candidate at term 4
* raft2020/04/13 11:11:10 INFO: 952f31ff200093ba received MsgVoteResp from 952f31ff200093ba at term 4
* raft2020/04/13 11:11:10 INFO: 952f31ff200093ba became leader at term 4
* raft2020/04/13 11:11:10 INFO: raft.node: 952f31ff200093ba elected leader 952f31ff200093ba at term 4
* 2020-04-13 11:11:10.879825 I | etcdserver: published {Name:minikube ClientURLs:[https://172.17.0.5:2379]} to cluster 5af0857ece1ce0e5
* 2020-04-13 11:11:10.904994 I | embed: ready to serve client requests
* 2020-04-13 11:11:11.004646 I | embed: ready to serve client requests
* 2020-04-13 11:11:11.460114 I | embed: serving client requests on 172.17.0.5:2379
* 2020-04-13 11:11:11.827404 I | embed: serving client requests on 127.0.0.1:2379
* 2020-04-13 11:21:12.040777 I | mvcc: store.index: compact 9826
* 2020-04-13 11:21:12.083947 I | mvcc: finished scheduled compaction at 9826 (took 42.773501ms)
* 2020-04-13 11:26:12.049670 I | mvcc: store.index: compact 10484
* 2020-04-13 11:26:12.067703 I | mvcc: finished scheduled compaction at 10484 (took 16.7343ms)
* 2020-04-13 11:31:12.058193 I | mvcc: store.index: compact 11138
* 2020-04-13 11:31:12.071901 I | mvcc: finished scheduled compaction at 11138 (took 13.3892ms)
* 2020-04-13 11:36:12.064318 I | mvcc: store.index: compact 11796
* 2020-04-13 11:36:12.077944 I | mvcc: finished scheduled compaction at 11796 (took 13.1385ms)
* 2020-04-13 11:41:12.070450 I | mvcc: store.index: compact 12454
* 2020-04-13 11:41:12.087404 I | mvcc: finished scheduled compaction at 12454 (took 16.4958ms)
* 2020-04-13 11:46:12.077194 I | mvcc: store.index: compact 13112
* 2020-04-13 11:46:12.090660 I | mvcc: finished scheduled compaction at 13112 (took 13.0898ms)
* 2020-04-13 11:51:12.082326 I | mvcc: store.index: compact 13770
* 2020-04-13 11:51:12.095886 I | mvcc: finished scheduled compaction at 13770 (took 13.058799ms)
* 2020-04-13 11:56:12.087368 I | mvcc: store.index: compact 14428
* 2020-04-13 11:56:12.100949 I | mvcc: finished scheduled compaction at 14428 (took 13.2205ms)
* 2020-04-13 12:01:12.096036 I | mvcc: store.index: compact 15086
* 2020-04-13 12:01:12.113363 I | mvcc: finished scheduled compaction at 15086 (took 17ms)
* 
* ==> kernel <==
*  12:01:57 up  1:00,  0 users,  load average: 0.33, 0.18, 0.20
* Linux minikube 4.19.76-linuxkit #1 SMP Thu Oct 17 19:31:58 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
* PRETTY_NAME="Ubuntu 19.10"
* 
* ==> kube-apiserver [93c876fd4d65] <==
* W0413 11:11:13.353948       1 genericapiserver.go:409] Skipping API node.k8s.io/v1alpha1 because it has no resources.
* W0413 11:11:13.376369       1 genericapiserver.go:409] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
* W0413 11:11:13.380538       1 genericapiserver.go:409] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
* W0413 11:11:13.396554       1 genericapiserver.go:409] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
* W0413 11:11:13.456235       1 genericapiserver.go:409] Skipping API apps/v1beta2 because it has no resources.
* W0413 11:11:13.456267       1 genericapiserver.go:409] Skipping API apps/v1beta1 because it has no resources.
* I0413 11:11:13.475105       1 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
* I0413 11:11:13.475123       1 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
* I0413 11:11:13.476681       1 client.go:361] parsed scheme: "endpoint"
* I0413 11:11:13.476699       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
* I0413 11:11:13.490323       1 client.go:361] parsed scheme: "endpoint"
* I0413 11:11:13.490366       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
* I0413 11:11:15.710748       1 dynamic_cafile_content.go:167] Starting request-header::/var/lib/minikube/certs/front-proxy-ca.crt
* I0413 11:11:15.710942       1 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
* I0413 11:11:15.711215       1 dynamic_serving_content.go:130] Starting serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key
* I0413 11:11:15.711236       1 secure_serving.go:178] Serving securely on [::]:8443
* I0413 11:11:15.711349       1 apiservice_controller.go:94] Starting APIServiceRegistrationController
* I0413 11:11:15.711356       1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
* I0413 11:11:15.711261       1 tlsconfig.go:240] Starting DynamicServingCertificateController
* I0413 11:11:15.714127       1 autoregister_controller.go:141] Starting autoregister controller
* I0413 11:11:15.714344       1 cache.go:32] Waiting for caches to sync for autoregister controller
* I0413 11:11:15.714494       1 crd_finalizer.go:266] Starting CRDFinalizer
* I0413 11:11:15.715522       1 available_controller.go:387] Starting AvailableConditionController
* I0413 11:11:15.715653       1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
* I0413 11:11:15.715746       1 controller.go:81] Starting OpenAPI AggregationController
* I0413 11:11:15.717734       1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
* I0413 11:11:15.718238       1 shared_informer.go:223] Waiting for caches to sync for cluster_authentication_trust_controller
* I0413 11:11:15.740960       1 crdregistration_controller.go:111] Starting crd-autoregister controller
* I0413 11:11:15.740974       1 shared_informer.go:223] Waiting for caches to sync for crd-autoregister
* I0413 11:11:15.740997       1 controller.go:86] Starting OpenAPI controller
* I0413 11:11:15.741011       1 customresource_discovery_controller.go:209] Starting DiscoveryController
* I0413 11:11:15.741023       1 naming_controller.go:291] Starting NamingConditionController
* I0413 11:11:15.741033       1 establishing_controller.go:76] Starting EstablishingController
* I0413 11:11:15.741044       1 nonstructuralschema_controller.go:186] Starting NonStructuralSchemaConditionController
* I0413 11:11:15.741055       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
* I0413 11:11:15.741094       1 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
* I0413 11:11:15.741114       1 dynamic_cafile_content.go:167] Starting request-header::/var/lib/minikube/certs/front-proxy-ca.crt
* E0413 11:11:15.763817       1 controller.go:152] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/172.17.0.5, ResourceVersion: 0, AdditionalErrorMsg: 
* I0413 11:11:15.819347       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
* I0413 11:11:15.830838       1 shared_informer.go:230] Caches are synced for cluster_authentication_trust_controller 
* I0413 11:11:15.832638       1 cache.go:39] Caches are synced for autoregister controller
* I0413 11:11:15.832954       1 cache.go:39] Caches are synced for AvailableConditionController controller
* I0413 11:11:15.842829       1 shared_informer.go:230] Caches are synced for crd-autoregister 
* I0413 11:11:15.858301       1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
* I0413 11:11:16.710656       1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
* I0413 11:11:16.710789       1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
* I0413 11:11:16.719612       1 storage_scheduling.go:143] all system priority classes are created successfully or already exist.
* I0413 11:11:17.696965       1 controller.go:606] quota admission added evaluator for: serviceaccounts
* I0413 11:11:17.735739       1 controller.go:606] quota admission added evaluator for: deployments.apps
* I0413 11:11:17.811692       1 controller.go:606] quota admission added evaluator for: daemonsets.apps
* I0413 11:11:17.837869       1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
* I0413 11:11:17.849973       1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
* I0413 11:11:31.712438       1 controller.go:606] quota admission added evaluator for: endpoints
* E0413 11:11:48.741639       1 rest.go:534] Address {172.18.0.6  0xc0082a8d30 0xc004f26700} isn't valid (pod ip doesn't match endpoint ip, skipping: 172.18.0.4 vs 172.18.0.6 (kubernetes-dashboard/dashboard-metrics-scraper-84bfdf55ff-hxlsv))
* E0413 11:11:48.741744       1 rest.go:544] Failed to find a valid address, skipping subset: &{[{172.18.0.6  0xc0082a8d30 0xc004f26700}] [] [{ 8000 TCP <nil>}]}
* I0413 11:11:49.520677       1 controller.go:606] quota admission added evaluator for: endpointslices.discovery.k8s.io
* W0413 11:24:43.089635       1 watcher.go:199] watch chan error: etcdserver: mvcc: required revision has been compacted
* W0413 11:31:33.172561       1 watcher.go:199] watch chan error: etcdserver: mvcc: required revision has been compacted
* W0413 11:47:29.198245       1 watcher.go:199] watch chan error: etcdserver: mvcc: required revision has been compacted
* W0413 11:57:01.302384       1 watcher.go:199] watch chan error: etcdserver: mvcc: required revision has been compacted
* 
* ==> kube-apiserver [e4783ba96f5a] <==
* 	/usr/local/go/src/net/http/server.go:2007 +0x44
* k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP(0xc007dbb6a0, 0x5147220, 0xc0005bd8f0, 0xc010550900)
* 	/workspace/anago-v1.18.0-rc.1.21+8be33caaf953ac/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters/timeout.go:92 +0x462
* k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithWaitGroup.func1(0x5147220, 0xc0005bd8f0, 0xc010550900)
* 	/workspace/anago-v1.18.0-rc.1.21+8be33caaf953ac/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters/waitgroup.go:59 +0x121
* net/http.HandlerFunc.ServeHTTP(0xc007ddc690, 0x5147220, 0xc0005bd8f0, 0xc010550900)
* 	/usr/local/go/src/net/http/server.go:2007 +0x44
* k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithRequestInfo.func1(0x5147220, 0xc0005bd8f0, 0xc010550800)
* 	/workspace/anago-v1.18.0-rc.1.21+8be33caaf953ac/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters/requestinfo.go:39 +0x274
* net/http.HandlerFunc.ServeHTTP(0xc007ddc6c0, 0x5147220, 0xc0005bd8f0, 0xc010550800)
* 	/usr/local/go/src/net/http/server.go:2007 +0x44
* k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.WithLogging.func1(0x513a020, 0xc00e697ce8, 0xc010550700)
* 	/workspace/anago-v1.18.0-rc.1.21+8be33caaf953ac/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog/httplog.go:89 +0x2ca
* net/http.HandlerFunc.ServeHTTP(0xc007dbb6c0, 0x513a020, 0xc00e697ce8, 0xc010550700)
* 	/usr/local/go/src/net/http/server.go:2007 +0x44
* k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.withPanicRecovery.func1(0x513a020, 0xc00e697ce8, 0xc010550700)
* 	/workspace/anago-v1.18.0-rc.1.21+8be33caaf953ac/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters/wrap.go:51 +0x13e
* net/http.HandlerFunc.ServeHTTP(0xc007dbb6e0, 0x513a020, 0xc00e697ce8, 0xc010550700)
* 	/usr/local/go/src/net/http/server.go:2007 +0x44
* k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.(*APIServerHandler).ServeHTTP(0xc007ddc6f0, 0x513a020, 0xc00e697ce8, 0xc010550700)
* 	/workspace/anago-v1.18.0-rc.1.21+8be33caaf953ac/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/handler.go:189 +0x51
* net/http.serverHandler.ServeHTTP(0xc0091ec620, 0x513a020, 0xc00e697ce8, 0xc010550700)
* 	/usr/local/go/src/net/http/server.go:2802 +0xa4
* net/http.initNPNRequest.ServeHTTP(0x51546a0, 0xc007487500, 0xc00e52c700, 0xc0091ec620, 0x513a020, 0xc00e697ce8, 0xc010550700)
* 	/usr/local/go/src/net/http/server.go:3366 +0x8d
* k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*serverConn).runHandler(0xc00fa5ef00, 0xc00e697ce8, 0xc010550700, 0xc010563e80)
* 	/workspace/anago-v1.18.0-rc.1.21+8be33caaf953ac/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/golang.org/x/net/http2/server.go:2149 +0x9f
* created by k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*serverConn).processHeaders
* 	/workspace/anago-v1.18.0-rc.1.21+8be33caaf953ac/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/golang.org/x/net/http2/server.go:1883 +0x4eb
* I0413 00:43:07.054773       1 controller.go:606] quota admission added evaluator for: serviceaccounts
* I0413 00:43:07.076669       1 controller.go:606] quota admission added evaluator for: deployments.apps
* I0413 00:43:07.121129       1 controller.go:606] quota admission added evaluator for: daemonsets.apps
* I0413 00:43:07.137441       1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
* I0413 00:43:07.143981       1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
* I0413 00:43:18.103669       1 controller.go:606] quota admission added evaluator for: endpoints
* E0413 00:43:36.014501       1 rest.go:534] Address {172.18.0.5  0xc009734610 0xc006b74690} isn't valid (pod ip doesn't match endpoint ip, skipping: 172.18.0.6 vs 172.18.0.5 (kubernetes-dashboard/dashboard-metrics-scraper-84bfdf55ff-hxlsv))
* E0413 00:43:36.014550       1 rest.go:544] Failed to find a valid address, skipping subset: &{[{172.18.0.5  0xc009734610 0xc006b74690}] [] [{ 8000 TCP <nil>}]}
* I0413 00:43:36.592923       1 controller.go:606] quota admission added evaluator for: endpointslices.discovery.k8s.io
* I0413 00:48:51.875530       1 dynamic_cafile_content.go:182] Shutting down request-header::/var/lib/minikube/certs/front-proxy-ca.crt
* I0413 00:48:51.875619       1 dynamic_cafile_content.go:182] Shutting down request-header::/var/lib/minikube/certs/front-proxy-ca.crt
* I0413 00:48:51.875629       1 dynamic_cafile_content.go:182] Shutting down client-ca-bundle::/var/lib/minikube/certs/ca.crt
* I0413 00:48:51.875689       1 controller.go:87] Shutting down OpenAPI AggregationController
* I0413 00:48:51.875832       1 tlsconfig.go:255] Shutting down DynamicServingCertificateController
* I0413 00:48:51.875842       1 dynamic_serving_content.go:145] Shutting down serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key
* I0413 00:48:51.875857       1 dynamic_cafile_content.go:182] Shutting down client-ca-bundle::/var/lib/minikube/certs/ca.crt
* I0413 00:48:51.875548       1 controller.go:181] Shutting down kubernetes service endpoint reconciler
* I0413 00:48:51.875988       1 controller.go:123] Shutting down OpenAPI controller
* I0413 00:48:51.876006       1 customresource_discovery_controller.go:220] Shutting down DiscoveryController
* I0413 00:48:51.876020       1 apiapproval_controller.go:198] Shutting down KubernetesAPIApprovalPolicyConformantConditionController
* I0413 00:48:51.876032       1 nonstructuralschema_controller.go:198] Shutting down NonStructuralSchemaConditionController
* I0413 00:48:51.876045       1 establishing_controller.go:87] Shutting down EstablishingController
* I0413 00:48:51.876056       1 naming_controller.go:302] Shutting down NamingConditionController
* I0413 00:48:51.876670       1 crdregistration_controller.go:142] Shutting down crd-autoregister controller
* I0413 00:48:51.876692       1 cluster_authentication_trust_controller.go:463] Shutting down cluster_authentication_trust_controller controller
* I0413 00:48:51.876706       1 available_controller.go:399] Shutting down AvailableConditionController
* I0413 00:48:51.876726       1 crd_finalizer.go:278] Shutting down CRDFinalizer
* I0413 00:48:51.876738       1 apiservice_controller.go:106] Shutting down APIServiceRegistrationController
* I0413 00:48:51.876749       1 autoregister_controller.go:165] Shutting down autoregister controller
* I0413 00:48:51.881294       1 secure_serving.go:222] Stopped listening on [::]:8443
* E0413 00:48:51.886743       1 controller.go:184] Get https://localhost:8443/api/v1/namespaces/default/endpoints/kubernetes: dial tcp 127.0.0.1:8443: connect: connection refused
* 
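Note: this older apiserver container's log opens with a truncated goroutine dump and ends with a clean shutdown at 00:48:51 (`Stopped listening on [::]:8443`), so the burst of `connection refused` errors against 172.17.0.5:8443 that the controller-manager, kube-proxy and scheduler report below lines up with that restart rather than with the service issue itself. If it helps triage, the dump can be filtered down to known-problem lines only (sketch, assuming this minikube version already ships the flag):

$ minikube logs --problems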
* ==> kube-controller-manager [7b766ce32ae4] <==
* I0413 11:11:48.363163       1 controllermanager.go:533] Started "persistentvolume-binder"
* I0413 11:11:48.363213       1 pv_controller_base.go:295] Starting persistent volume controller
* I0413 11:11:48.363224       1 shared_informer.go:223] Waiting for caches to sync for persistent volume
* I0413 11:11:48.512965       1 controllermanager.go:533] Started "clusterrole-aggregation"
* I0413 11:11:48.513011       1 clusterroleaggregation_controller.go:149] Starting ClusterRoleAggregator
* I0413 11:11:48.513017       1 shared_informer.go:223] Waiting for caches to sync for ClusterRoleAggregator
* I0413 11:11:48.663507       1 controllermanager.go:533] Started "podgc"
* I0413 11:11:48.663554       1 gc_controller.go:89] Starting GC controller
* I0413 11:11:48.663560       1 shared_informer.go:223] Waiting for caches to sync for GC
* I0413 11:11:48.812900       1 controllermanager.go:533] Started "csrapproving"
* I0413 11:11:48.812972       1 certificate_controller.go:119] Starting certificate controller "csrapproving"
* I0413 11:11:48.812981       1 shared_informer.go:223] Waiting for caches to sync for certificate-csrapproving
* E0413 11:11:48.964038       1 core.go:89] Failed to start service controller: WARNING: no cloud provider provided, services of type LoadBalancer will fail
* W0413 11:11:48.964073       1 controllermanager.go:525] Skipping "service"
* I0413 11:11:48.964700       1 shared_informer.go:223] Waiting for caches to sync for garbage collector
* I0413 11:11:48.995535       1 shared_informer.go:223] Waiting for caches to sync for resource quota
* W0413 11:11:49.016194       1 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="minikube" does not exist
* I0413 11:11:49.028001       1 shared_informer.go:230] Caches are synced for HPA 
* I0413 11:11:49.044806       1 shared_informer.go:230] Caches are synced for node 
* I0413 11:11:49.044902       1 range_allocator.go:172] Starting range CIDR allocator
* I0413 11:11:49.044937       1 shared_informer.go:223] Waiting for caches to sync for cidrallocator
* I0413 11:11:49.044948       1 shared_informer.go:230] Caches are synced for cidrallocator 
* I0413 11:11:49.063371       1 shared_informer.go:230] Caches are synced for persistent volume 
* I0413 11:11:49.063457       1 shared_informer.go:230] Caches are synced for expand 
* I0413 11:11:49.063788       1 shared_informer.go:230] Caches are synced for GC 
* I0413 11:11:49.074354       1 shared_informer.go:230] Caches are synced for TTL 
* I0413 11:11:49.077385       1 shared_informer.go:230] Caches are synced for deployment 
* I0413 11:11:49.081253       1 shared_informer.go:230] Caches are synced for ReplicaSet 
* I0413 11:11:49.082133       1 shared_informer.go:230] Caches are synced for PV protection 
* I0413 11:11:49.091046       1 shared_informer.go:230] Caches are synced for bootstrap_signer 
* I0413 11:11:49.096041       1 shared_informer.go:230] Caches are synced for taint 
* I0413 11:11:49.096102       1 node_lifecycle_controller.go:1433] Initializing eviction metric for zone: 
* W0413 11:11:49.096193       1 node_lifecycle_controller.go:1048] Missing timestamp for Node minikube. Assuming now as a timestamp.
* I0413 11:11:49.096251       1 node_lifecycle_controller.go:1249] Controller detected that zone  is now in state Normal.
* I0413 11:11:49.096497       1 taint_manager.go:187] Starting NoExecuteTaintManager
* I0413 11:11:49.096721       1 event.go:278] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"3e271494-1882-4352-a12a-ef9d46f4914c", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node minikube event: Registered Node minikube in Controller
* I0413 11:11:49.097410       1 shared_informer.go:230] Caches are synced for ReplicationController 
* I0413 11:11:49.107035       1 shared_informer.go:230] Caches are synced for PVC protection 
* I0413 11:11:49.183856       1 shared_informer.go:230] Caches are synced for stateful set 
* I0413 11:11:49.212951       1 shared_informer.go:230] Caches are synced for disruption 
* I0413 11:11:49.213032       1 disruption.go:339] Sending events to api server.
* I0413 11:11:49.213223       1 shared_informer.go:230] Caches are synced for daemon sets 
* I0413 11:11:49.270897       1 shared_informer.go:230] Caches are synced for namespace 
* I0413 11:11:49.303301       1 shared_informer.go:230] Caches are synced for service account 
* I0413 11:11:49.390234       1 shared_informer.go:230] Caches are synced for attach detach 
* I0413 11:11:49.470420       1 shared_informer.go:230] Caches are synced for job 
* I0413 11:11:49.496020       1 shared_informer.go:230] Caches are synced for resource quota 
* I0413 11:11:49.513348       1 shared_informer.go:230] Caches are synced for endpoint 
* I0413 11:11:49.513439       1 shared_informer.go:230] Caches are synced for endpoint_slice 
* I0413 11:11:49.543081       1 event.go:278] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"kube-dns", UID:"c7296991-22b7-42a1-8cd3-16d1e8b933ce", APIVersion:"v1", ResourceVersion:"8367", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpoint' Failed to update endpoint kube-system/kube-dns: Operation cannot be fulfilled on endpoints "kube-dns": the object has been modified; please apply your changes to the latest version and try again
* I0413 11:11:49.544909       1 event.go:278] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard", UID:"37f6a5a3-7cc7-4d17-8fc4-1316bb14183a", APIVersion:"v1", ResourceVersion:"8364", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpoint' Failed to update endpoint kubernetes-dashboard/kubernetes-dashboard: Operation cannot be fulfilled on endpoints "kubernetes-dashboard": the object has been modified; please apply your changes to the latest version and try again
* I0413 11:11:49.554244       1 event.go:278] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"default", Name:"hello-node", UID:"6087aecb-17c6-4528-a7a3-164eccb5be2f", APIVersion:"v1", ResourceVersion:"8538", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpoint' Failed to update endpoint default/hello-node: Operation cannot be fulfilled on endpoints "hello-node": the object has been modified; please apply your changes to the latest version and try again
* I0413 11:11:49.556690       1 event.go:278] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper", UID:"b7bff663-6c15-4a00-99c3-0d5a4798798a", APIVersion:"v1", ResourceVersion:"8366", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpoint' Failed to update endpoint kubernetes-dashboard/dashboard-metrics-scraper: Operation cannot be fulfilled on endpoints "dashboard-metrics-scraper": the object has been modified; please apply your changes to the latest version and try again
* I0413 11:11:49.564218       1 shared_informer.go:230] Caches are synced for resource quota 
* I0413 11:11:49.568361       1 shared_informer.go:230] Caches are synced for certificate-csrsigning 
* I0413 11:11:49.613260       1 shared_informer.go:230] Caches are synced for certificate-csrapproving 
* I0413 11:11:49.613260       1 shared_informer.go:230] Caches are synced for ClusterRoleAggregator 
* I0413 11:11:49.661343       1 shared_informer.go:230] Caches are synced for garbage collector 
* I0413 11:11:49.661372       1 garbagecollector.go:142] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
* I0413 11:11:49.665061       1 shared_informer.go:230] Caches are synced for garbage collector 
* 
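Note: the controller-manager line above, `Failed to start service controller: WARNING: no cloud provider provided, services of type LoadBalancer will fail`, is expected on minikube: without a cloud provider the LoadBalancer service never gets an external IP, `kubectl get svc` keeps reporting `<pending>`, and reaching the service depends entirely on the tunnel that `minikube service` opens (or on the NodePort). A quick sanity check against the `hello-node` service (sketch; only standard minikube/kubectl commands assumed):

$ kubectl get svc hello-node          # EXTERNAL-IP stays <pending> without a tunnel
$ minikube tunnel                     # keep running in a second terminal to allocate one
$ minikube service hello-node --url   # print the URL instead of opening a browser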
* ==> kube-controller-manager [cabc6d45c8de] <==
* I0413 00:43:36.824241       1 shared_informer.go:230] Caches are synced for attach detach 
* I0413 00:43:36.874579       1 shared_informer.go:230] Caches are synced for stateful set 
* I0413 00:43:36.912780       1 shared_informer.go:230] Caches are synced for PV protection 
* I0413 00:43:36.913165       1 shared_informer.go:230] Caches are synced for daemon sets 
* I0413 00:43:37.054912       1 shared_informer.go:230] Caches are synced for persistent volume 
* I0413 00:43:37.069436       1 shared_informer.go:230] Caches are synced for expand 
* I0413 00:43:37.118156       1 shared_informer.go:230] Caches are synced for resource quota 
* I0413 00:43:37.118957       1 shared_informer.go:230] Caches are synced for garbage collector 
* I0413 00:43:37.126224       1 shared_informer.go:230] Caches are synced for resource quota 
* I0413 00:43:37.154398       1 shared_informer.go:230] Caches are synced for disruption 
* I0413 00:43:37.154444       1 disruption.go:339] Sending events to api server.
* I0413 00:43:37.203776       1 shared_informer.go:230] Caches are synced for garbage collector 
* I0413 00:43:37.203802       1 garbagecollector.go:142] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
* I0413 00:43:37.205053       1 shared_informer.go:230] Caches are synced for deployment 
* E0413 00:48:51.881222       1 reflector.go:380] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Pod: Get https://172.17.0.5:8443/api/v1/pods?allowWatchBookmarks=true&resourceVersion=8383&timeout=5m43s&timeoutSeconds=343&watch=true: dial tcp 172.17.0.5:8443: connect: connection refused
* E0413 00:48:51.881262       1 reflector.go:380] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.PersistentVolume: Get https://172.17.0.5:8443/api/v1/persistentvolumes?allowWatchBookmarks=true&resourceVersion=8206&timeout=6m47s&timeoutSeconds=407&watch=true: dial tcp 172.17.0.5:8443: connect: connection refused
* E0413 00:48:51.881290       1 reflector.go:380] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.CSIDriver: Get https://172.17.0.5:8443/apis/storage.k8s.io/v1/csidrivers?allowWatchBookmarks=true&resourceVersion=8206&timeout=9m57s&timeoutSeconds=597&watch=true: dial tcp 172.17.0.5:8443: connect: connection refused
* E0413 00:48:51.881315       1 reflector.go:380] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.ReplicationController: Get https://172.17.0.5:8443/api/v1/replicationcontrollers?allowWatchBookmarks=true&resourceVersion=8206&timeout=9m56s&timeoutSeconds=596&watch=true: dial tcp 172.17.0.5:8443: connect: connection refused
* E0413 00:48:51.881339       1 reflector.go:380] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1beta1.PodDisruptionBudget: Get https://172.17.0.5:8443/apis/policy/v1beta1/poddisruptionbudgets?allowWatchBookmarks=true&resourceVersion=8206&timeout=5m57s&timeoutSeconds=357&watch=true: dial tcp 172.17.0.5:8443: connect: connection refused
* E0413 00:48:51.881363       1 reflector.go:380] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.PriorityClass: Get https://172.17.0.5:8443/apis/scheduling.k8s.io/v1/priorityclasses?allowWatchBookmarks=true&resourceVersion=8206&timeout=6m0s&timeoutSeconds=360&watch=true: dial tcp 172.17.0.5:8443: connect: connection refused
* E0413 00:48:51.881408       1 reflector.go:380] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.NetworkPolicy: Get https://172.17.0.5:8443/apis/networking.k8s.io/v1/networkpolicies?allowWatchBookmarks=true&resourceVersion=8206&timeout=7m50s&timeoutSeconds=470&watch=true: dial tcp 172.17.0.5:8443: connect: connection refused
* E0413 00:48:51.881433       1 reflector.go:380] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: Get https://172.17.0.5:8443/apis/apiregistration.k8s.io/v1/apiservices?allowWatchBookmarks=true&resourceVersion=8206&timeout=6m44s&timeoutSeconds=404&watch=true: dial tcp 172.17.0.5:8443: connect: connection refused
* E0413 00:48:51.881455       1 reflector.go:380] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Service: Get https://172.17.0.5:8443/api/v1/services?allowWatchBookmarks=true&resourceVersion=8537&timeout=8m2s&timeoutSeconds=482&watch=true: dial tcp 172.17.0.5:8443: connect: connection refused
* E0413 00:48:51.881479       1 reflector.go:380] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.PersistentVolumeClaim: Get https://172.17.0.5:8443/api/v1/persistentvolumeclaims?allowWatchBookmarks=true&resourceVersion=8206&timeout=9m44s&timeoutSeconds=584&watch=true: dial tcp 172.17.0.5:8443: connect: connection refused
* E0413 00:48:51.881504       1 reflector.go:380] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.HorizontalPodAutoscaler: Get https://172.17.0.5:8443/apis/autoscaling/v1/horizontalpodautoscalers?allowWatchBookmarks=true&resourceVersion=8206&timeout=7m11s&timeoutSeconds=431&watch=true: dial tcp 172.17.0.5:8443: connect: connection refused
* E0413 00:48:51.881528       1 reflector.go:380] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.CSINode: Get https://172.17.0.5:8443/apis/storage.k8s.io/v1/csinodes?allowWatchBookmarks=true&resourceVersion=8206&timeout=6m57s&timeoutSeconds=417&watch=true: dial tcp 172.17.0.5:8443: connect: connection refused
* E0413 00:48:51.881555       1 reflector.go:380] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1beta1.CertificateSigningRequest: Get https://172.17.0.5:8443/apis/certificates.k8s.io/v1beta1/certificatesigningrequests?allowWatchBookmarks=true&resourceVersion=8206&timeout=6m25s&timeoutSeconds=385&watch=true: dial tcp 172.17.0.5:8443: connect: connection refused
* E0413 00:48:51.881582       1 reflector.go:380] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Secret: Get https://172.17.0.5:8443/api/v1/secrets?allowWatchBookmarks=true&resourceVersion=8288&timeout=6m39s&timeoutSeconds=399&watch=true: dial tcp 172.17.0.5:8443: connect: connection refused
* E0413 00:48:51.881609       1 reflector.go:380] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Role: Get https://172.17.0.5:8443/apis/rbac.authorization.k8s.io/v1/roles?allowWatchBookmarks=true&resourceVersion=8206&timeout=8m54s&timeoutSeconds=534&watch=true: dial tcp 172.17.0.5:8443: connect: connection refused
* E0413 00:48:51.881635       1 reflector.go:380] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Endpoints: Get https://172.17.0.5:8443/api/v1/endpoints?allowWatchBookmarks=true&resourceVersion=9077&timeout=7m50s&timeoutSeconds=470&watch=true: dial tcp 172.17.0.5:8443: connect: connection refused
* E0413 00:48:51.881660       1 reflector.go:380] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.StorageClass: Get https://172.17.0.5:8443/apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=8206&timeout=6m41s&timeoutSeconds=401&watch=true: dial tcp 172.17.0.5:8443: connect: connection refused
* E0413 00:48:51.881683       1 reflector.go:380] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1beta1.Event: Get https://172.17.0.5:8443/apis/events.k8s.io/v1beta1/events?allowWatchBookmarks=true&resourceVersion=8382&timeout=9m50s&timeoutSeconds=590&watch=true: dial tcp 172.17.0.5:8443: connect: connection refused
* E0413 00:48:51.881708       1 reflector.go:380] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1beta1.Ingress: Get https://172.17.0.5:8443/apis/networking.k8s.io/v1beta1/ingresses?allowWatchBookmarks=true&resourceVersion=8206&timeout=8m15s&timeoutSeconds=495&watch=true: dial tcp 172.17.0.5:8443: connect: connection refused
* E0413 00:48:51.881732       1 reflector.go:380] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Deployment: Get https://172.17.0.5:8443/apis/apps/v1/deployments?allowWatchBookmarks=true&resourceVersion=8374&timeout=5m15s&timeoutSeconds=315&watch=true: dial tcp 172.17.0.5:8443: connect: connection refused
* E0413 00:48:51.881759       1 reflector.go:380] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.ClusterRoleBinding: Get https://172.17.0.5:8443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings?allowWatchBookmarks=true&resourceVersion=8206&timeout=7m13s&timeoutSeconds=433&watch=true: dial tcp 172.17.0.5:8443: connect: connection refused
* E0413 00:48:51.881787       1 reflector.go:380] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.ServiceAccount: Get https://172.17.0.5:8443/api/v1/serviceaccounts?allowWatchBookmarks=true&resourceVersion=8206&timeout=6m0s&timeoutSeconds=360&watch=true: dial tcp 172.17.0.5:8443: connect: connection refused
* E0413 00:48:51.881812       1 reflector.go:380] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1beta1.IngressClass: Get https://172.17.0.5:8443/apis/networking.k8s.io/v1beta1/ingressclasses?allowWatchBookmarks=true&resourceVersion=8206&timeout=9m43s&timeoutSeconds=583&watch=true: dial tcp 172.17.0.5:8443: connect: connection refused
* E0413 00:48:51.881837       1 reflector.go:380] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.ConfigMap: Get https://172.17.0.5:8443/api/v1/configmaps?allowWatchBookmarks=true&resourceVersion=8206&timeout=9m16s&timeoutSeconds=556&watch=true: dial tcp 172.17.0.5:8443: connect: connection refused
* E0413 00:48:51.881862       1 reflector.go:380] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1beta1.PodSecurityPolicy: Get https://172.17.0.5:8443/apis/policy/v1beta1/podsecuritypolicies?allowWatchBookmarks=true&resourceVersion=8206&timeout=5m53s&timeoutSeconds=353&watch=true: dial tcp 172.17.0.5:8443: connect: connection refused
* E0413 00:48:51.882619       1 reflector.go:380] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.VolumeAttachment: Get https://172.17.0.5:8443/apis/storage.k8s.io/v1/volumeattachments?allowWatchBookmarks=true&resourceVersion=8206&timeout=7m5s&timeoutSeconds=425&watch=true: dial tcp 172.17.0.5:8443: connect: connection refused
* E0413 00:48:51.882648       1 reflector.go:380] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Lease: Get https://172.17.0.5:8443/apis/coordination.k8s.io/v1/leases?allowWatchBookmarks=true&resourceVersion=9078&timeout=7m22s&timeoutSeconds=442&watch=true: dial tcp 172.17.0.5:8443: connect: connection refused
* E0413 00:48:51.884019       1 reflector.go:380] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.ClusterRole: Get https://172.17.0.5:8443/apis/rbac.authorization.k8s.io/v1/clusterroles?allowWatchBookmarks=true&resourceVersion=8206&timeout=6m20s&timeoutSeconds=380&watch=true: dial tcp 172.17.0.5:8443: connect: connection refused
* E0413 00:48:51.884056       1 reflector.go:380] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1beta1.Ingress: Get https://172.17.0.5:8443/apis/extensions/v1beta1/ingresses?allowWatchBookmarks=true&resourceVersion=8206&timeout=7m8s&timeoutSeconds=428&watch=true: dial tcp 172.17.0.5:8443: connect: connection refused
* E0413 00:48:51.884081       1 reflector.go:380] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.StatefulSet: Get https://172.17.0.5:8443/apis/apps/v1/statefulsets?allowWatchBookmarks=true&resourceVersion=8206&timeout=9m17s&timeoutSeconds=557&watch=true: dial tcp 172.17.0.5:8443: connect: connection refused
* E0413 00:48:51.884104       1 reflector.go:380] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.ResourceQuota: Get https://172.17.0.5:8443/api/v1/resourcequotas?allowWatchBookmarks=true&resourceVersion=8206&timeout=5m14s&timeoutSeconds=314&watch=true: dial tcp 172.17.0.5:8443: connect: connection refused
* E0413 00:48:51.884149       1 reflector.go:380] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.ControllerRevision: Get https://172.17.0.5:8443/apis/apps/v1/controllerrevisions?allowWatchBookmarks=true&resourceVersion=8206&timeout=6m12s&timeoutSeconds=372&watch=true: dial tcp 172.17.0.5:8443: connect: connection refused
* E0413 00:48:51.884173       1 reflector.go:380] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1beta1.EndpointSlice: Get https://172.17.0.5:8443/apis/discovery.k8s.io/v1beta1/endpointslices?allowWatchBookmarks=true&resourceVersion=8539&timeout=7m41s&timeoutSeconds=461&watch=true: dial tcp 172.17.0.5:8443: connect: connection refused
* E0413 00:48:51.884195       1 reflector.go:380] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Namespace: Get https://172.17.0.5:8443/api/v1/namespaces?allowWatchBookmarks=true&resourceVersion=8206&timeout=6m19s&timeoutSeconds=379&watch=true: dial tcp 172.17.0.5:8443: connect: connection refused
* E0413 00:48:51.884218       1 reflector.go:380] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.ValidatingWebhookConfiguration: Get https://172.17.0.5:8443/apis/admissionregistration.k8s.io/v1/validatingwebhookconfigurations?allowWatchBookmarks=true&resourceVersion=8206&timeout=9m44s&timeoutSeconds=584&watch=true: dial tcp 172.17.0.5:8443: connect: connection refused
* E0413 00:48:51.884243       1 reflector.go:380] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.MutatingWebhookConfiguration: Get https://172.17.0.5:8443/apis/admissionregistration.k8s.io/v1/mutatingwebhookconfigurations?allowWatchBookmarks=true&resourceVersion=8206&timeout=5m6s&timeoutSeconds=306&watch=true: dial tcp 172.17.0.5:8443: connect: connection refused
* E0413 00:48:51.884267       1 reflector.go:380] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1beta1.RuntimeClass: Get https://172.17.0.5:8443/apis/node.k8s.io/v1beta1/runtimeclasses?allowWatchBookmarks=true&resourceVersion=8206&timeout=5m38s&timeoutSeconds=338&watch=true: dial tcp 172.17.0.5:8443: connect: connection refused
* E0413 00:48:51.884290       1 reflector.go:380] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.ReplicaSet: Get https://172.17.0.5:8443/apis/apps/v1/replicasets?allowWatchBookmarks=true&resourceVersion=8206&timeout=9m11s&timeoutSeconds=551&watch=true: dial tcp 172.17.0.5:8443: connect: connection refused
* E0413 00:48:51.884312       1 reflector.go:380] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1beta1.CronJob: Get https://172.17.0.5:8443/apis/batch/v1beta1/cronjobs?allowWatchBookmarks=true&resourceVersion=8206&timeout=5m48s&timeoutSeconds=348&watch=true: dial tcp 172.17.0.5:8443: connect: connection refused
* E0413 00:48:51.884334       1 reflector.go:380] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Node: Get https://172.17.0.5:8443/api/v1/nodes?allowWatchBookmarks=true&resourceVersion=8973&timeout=5m15s&timeoutSeconds=315&watch=true: dial tcp 172.17.0.5:8443: connect: connection refused
* E0413 00:48:51.884358       1 reflector.go:380] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.LimitRange: Get https://172.17.0.5:8443/api/v1/limitranges?allowWatchBookmarks=true&resourceVersion=8206&timeout=9m29s&timeoutSeconds=569&watch=true: dial tcp 172.17.0.5:8443: connect: connection refused
* E0413 00:48:51.884405       1 reflector.go:380] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.PodTemplate: Get https://172.17.0.5:8443/api/v1/podtemplates?allowWatchBookmarks=true&resourceVersion=8206&timeout=5m0s&timeoutSeconds=300&watch=true: dial tcp 172.17.0.5:8443: connect: connection refused
* E0413 00:48:51.884433       1 reflector.go:380] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: Get https://172.17.0.5:8443/apis/apiextensions.k8s.io/v1/customresourcedefinitions?allowWatchBookmarks=true&resourceVersion=8206&timeout=9m15s&timeoutSeconds=555&watch=true: dial tcp 172.17.0.5:8443: connect: connection refused
* E0413 00:48:51.884456       1 reflector.go:380] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.DaemonSet: Get https://172.17.0.5:8443/apis/apps/v1/daemonsets?allowWatchBookmarks=true&resourceVersion=8259&timeout=7m53s&timeoutSeconds=473&watch=true: dial tcp 172.17.0.5:8443: connect: connection refused
* E0413 00:48:51.884481       1 reflector.go:380] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Job: Get https://172.17.0.5:8443/apis/batch/v1/jobs?allowWatchBookmarks=true&resourceVersion=8206&timeout=8m24s&timeoutSeconds=504&watch=true: dial tcp 172.17.0.5:8443: connect: connection refused
* E0413 00:48:51.884576       1 reflector.go:380] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.RoleBinding: Get https://172.17.0.5:8443/apis/rbac.authorization.k8s.io/v1/rolebindings?allowWatchBookmarks=true&resourceVersion=8206&timeout=5m17s&timeoutSeconds=317&watch=true: dial tcp 172.17.0.5:8443: connect: connection refused
* 
* ==> kube-proxy [257686d80069] <==
* W0413 11:11:17.896115       1 server_others.go:559] Unknown proxy mode "", assuming iptables proxy
* I0413 11:11:17.958034       1 node.go:136] Successfully retrieved node IP: 172.17.0.5
* I0413 11:11:17.958075       1 server_others.go:186] Using iptables Proxier.
* I0413 11:11:17.960795       1 server.go:583] Version: v1.18.0
* I0413 11:11:17.962975       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_max' to 131072
* I0413 11:11:17.963033       1 conntrack.go:52] Setting nf_conntrack_max to 131072
* E0413 11:11:17.963389       1 conntrack.go:127] sysfs is not writable: {Device:sysfs Path:/sys Type:sysfs Opts:[ro nosuid nodev noexec relatime] Freq:0 Pass:0} (mount options are [ro nosuid nodev noexec relatime])
* I0413 11:11:17.964359       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
* I0413 11:11:17.964408       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
* I0413 11:11:17.980369       1 config.go:315] Starting service config controller
* I0413 11:11:17.980389       1 shared_informer.go:223] Waiting for caches to sync for service config
* I0413 11:11:17.991699       1 config.go:133] Starting endpoints config controller
* I0413 11:11:17.991741       1 shared_informer.go:223] Waiting for caches to sync for endpoints config
* I0413 11:11:18.081716       1 shared_informer.go:230] Caches are synced for service config 
* I0413 11:11:18.093099       1 shared_informer.go:230] Caches are synced for endpoints config 
* 
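Note: kube-proxy itself looks healthy here (iptables mode, node IP 172.17.0.5 retrieved, both config caches synced), so in-cluster routing to the NodePort should be fine. With the Docker driver on Windows the node IP is only reachable from inside the Docker network, not from the host, which is why `minikube service` has to proxy the connection. A way to test each side (sketch; assumes curl is present in the node image and that `minikube ssh` accepts a trailing command):

$ NODE_PORT=$(kubectl get svc hello-node -o jsonpath='{.spec.ports[0].nodePort}')
$ minikube ssh -- curl -s http://localhost:$NODE_PORT   # from inside the node: should answer
$ kubectl port-forward service/hello-node 8080:8080     # host-side fallback on 127.0.0.1:8080 (tutorial's port; adjust if different)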
* ==> kube-proxy [981d47c4ff60] <==
* W0413 00:43:04.584174       1 server_others.go:559] Unknown proxy mode "", assuming iptables proxy
* I0413 00:43:04.599436       1 node.go:136] Successfully retrieved node IP: 172.17.0.5
* I0413 00:43:04.599466       1 server_others.go:186] Using iptables Proxier.
* I0413 00:43:04.599669       1 server.go:583] Version: v1.18.0
* I0413 00:43:04.599975       1 conntrack.go:52] Setting nf_conntrack_max to 131072
* E0413 00:43:04.600263       1 conntrack.go:127] sysfs is not writable: {Device:sysfs Path:/sys Type:sysfs Opts:[ro nosuid nodev noexec relatime] Freq:0 Pass:0} (mount options are [ro nosuid nodev noexec relatime])
* I0413 00:43:04.600326       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
* I0413 00:43:04.600361       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
* I0413 00:43:04.602737       1 config.go:133] Starting endpoints config controller
* I0413 00:43:04.602777       1 shared_informer.go:223] Waiting for caches to sync for endpoints config
* I0413 00:43:04.602796       1 config.go:315] Starting service config controller
* I0413 00:43:04.602800       1 shared_informer.go:223] Waiting for caches to sync for service config
* I0413 00:43:04.702922       1 shared_informer.go:230] Caches are synced for service config 
* I0413 00:43:04.702987       1 shared_informer.go:230] Caches are synced for endpoints config 
* E0413 00:48:51.883229       1 reflector.go:380] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Service: Get https://172.17.0.5:8443/api/v1/services?allowWatchBookmarks=true&labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=8537&timeout=6m22s&timeoutSeconds=382&watch=true: dial tcp 172.17.0.5:8443: connect: connection refused
* E0413 00:48:51.883285       1 reflector.go:380] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Endpoints: Get https://172.17.0.5:8443/api/v1/endpoints?allowWatchBookmarks=true&labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=9077&timeout=9m5s&timeoutSeconds=545&watch=true: dial tcp 172.17.0.5:8443: connect: connection refused
* 
* ==> kube-scheduler [5aa36e61bfed] <==
* I0413 11:11:09.632147       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
* I0413 11:11:09.632369       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
* I0413 11:11:09.902699       1 serving.go:313] Generated self-signed cert in-memory
* W0413 11:11:15.825515       1 authentication.go:349] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
* W0413 11:11:15.825843       1 authentication.go:297] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
* W0413 11:11:15.825896       1 authentication.go:298] Continuing without authentication configuration. This may treat all requests as anonymous.
* W0413 11:11:15.825966       1 authentication.go:299] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
* I0413 11:11:15.872152       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
* I0413 11:11:15.872199       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
* W0413 11:11:15.876923       1 authorization.go:47] Authorization is disabled
* W0413 11:11:15.876937       1 authentication.go:40] Authentication is disabled
* I0413 11:11:15.876947       1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
* I0413 11:11:15.878866       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
* I0413 11:11:15.879278       1 secure_serving.go:178] Serving securely on 127.0.0.1:10259
* I0413 11:11:15.881357       1 tlsconfig.go:240] Starting DynamicServingCertificateController
* I0413 11:11:15.883103       1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
* I0413 11:11:15.981884       1 leaderelection.go:242] attempting to acquire leader lease  kube-system/kube-scheduler...
* I0413 11:11:15.983228       1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
* I0413 11:11:31.718355       1 leaderelection.go:252] successfully acquired lease kube-system/kube-scheduler
* 
* ==> kube-scheduler [e06dbd127d27] <==
* I0413 00:42:55.635312       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
* I0413 00:42:55.635374       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
* I0413 00:42:56.704529       1 serving.go:313] Generated self-signed cert in-memory
* W0413 00:43:02.297714       1 authentication.go:349] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
* W0413 00:43:02.297882       1 authentication.go:297] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
* W0413 00:43:02.297973       1 authentication.go:298] Continuing without authentication configuration. This may treat all requests as anonymous.
* W0413 00:43:02.298048       1 authentication.go:299] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
* I0413 00:43:02.381014       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
* I0413 00:43:02.381049       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
* W0413 00:43:02.382595       1 authorization.go:47] Authorization is disabled
* W0413 00:43:02.382613       1 authentication.go:40] Authentication is disabled
* I0413 00:43:02.382623       1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
* I0413 00:43:02.406286       1 secure_serving.go:178] Serving securely on 127.0.0.1:10259
* I0413 00:43:02.406942       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
* I0413 00:43:02.406967       1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
* I0413 00:43:02.407011       1 tlsconfig.go:240] Starting DynamicServingCertificateController
* I0413 00:43:02.507213       1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
* I0413 00:43:02.507298       1 leaderelection.go:242] attempting to acquire leader lease  kube-system/kube-scheduler...
* I0413 00:43:18.112418       1 leaderelection.go:252] successfully acquired lease kube-system/kube-scheduler
* E0413 00:48:51.877803       1 reflector.go:380] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: Get https://172.17.0.5:8443/api/v1/namespaces/kube-system/configmaps?allowWatchBookmarks=true&fieldSelector=metadata.name%3Dextension-apiserver-authentication&resourceVersion=8206&timeout=9m21s&timeoutSeconds=561&watch=true: dial tcp 172.17.0.5:8443: connect: connection refused
* E0413 00:48:51.881591       1 reflector.go:380] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.CSINode: Get https://172.17.0.5:8443/apis/storage.k8s.io/v1/csinodes?allowWatchBookmarks=true&resourceVersion=8206&timeout=6m10s&timeoutSeconds=370&watch=true: dial tcp 172.17.0.5:8443: connect: connection refused
* E0413 00:48:51.881660       1 reflector.go:380] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Node: Get https://172.17.0.5:8443/api/v1/nodes?allowWatchBookmarks=true&resourceVersion=8973&timeout=6m6s&timeoutSeconds=366&watch=true: dial tcp 172.17.0.5:8443: connect: connection refused
* E0413 00:48:51.881737       1 reflector.go:380] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to watch *v1.Pod: Get https://172.17.0.5:8443/api/v1/pods?allowWatchBookmarks=true&fieldSelector=status.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&resourceVersion=8383&timeoutSeconds=541&watch=true: dial tcp 172.17.0.5:8443: connect: connection refused
* E0413 00:48:51.881807       1 reflector.go:380] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Service: Get https://172.17.0.5:8443/api/v1/services?allowWatchBookmarks=true&resourceVersion=8537&timeout=9m20s&timeoutSeconds=560&watch=true: dial tcp 172.17.0.5:8443: connect: connection refused
* E0413 00:48:51.881900       1 reflector.go:380] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1beta1.PodDisruptionBudget: Get https://172.17.0.5:8443/apis/policy/v1beta1/poddisruptionbudgets?allowWatchBookmarks=true&resourceVersion=8206&timeout=5m52s&timeoutSeconds=352&watch=true: dial tcp 172.17.0.5:8443: connect: connection refused
* E0413 00:48:51.881974       1 reflector.go:380] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.StorageClass: Get https://172.17.0.5:8443/apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=8206&timeout=7m58s&timeoutSeconds=478&watch=true: dial tcp 172.17.0.5:8443: connect: connection refused
* E0413 00:48:51.882034       1 reflector.go:380] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.PersistentVolume: Get https://172.17.0.5:8443/api/v1/persistentvolumes?allowWatchBookmarks=true&resourceVersion=8206&timeout=9m23s&timeoutSeconds=563&watch=true: dial tcp 172.17.0.5:8443: connect: connection refused
* E0413 00:48:51.882090       1 reflector.go:380] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.PersistentVolumeClaim: Get https://172.17.0.5:8443/api/v1/persistentvolumeclaims?allowWatchBookmarks=true&resourceVersion=8206&timeout=9m48s&timeoutSeconds=588&watch=true: dial tcp 172.17.0.5:8443: connect: connection refused
* 
* ==> kubelet <==
* -- Logs begin at Mon 2020-04-13 11:10:56 UTC, end at Mon 2020-04-13 12:01:59 UTC. --
* Apr 13 11:11:15 minikube kubelet[611]: I0413 11:11:15.848489     611 topology_manager.go:233] [topologymanager] Topology Admit Handler
* Apr 13 11:11:15 minikube kubelet[611]: I0413 11:11:15.860348     611 topology_manager.go:233] [topologymanager] Topology Admit Handler
* Apr 13 11:11:15 minikube kubelet[611]: I0413 11:11:15.868552     611 topology_manager.go:233] [topologymanager] Topology Admit Handler
* Apr 13 11:11:15 minikube kubelet[611]: I0413 11:11:15.879782     611 topology_manager.go:233] [topologymanager] Topology Admit Handler
* Apr 13 11:11:15 minikube kubelet[611]: I0413 11:11:15.893285     611 topology_manager.go:233] [topologymanager] Topology Admit Handler
* Apr 13 11:11:15 minikube kubelet[611]: I0413 11:11:15.901413     611 topology_manager.go:233] [topologymanager] Topology Admit Handler
* Apr 13 11:11:15 minikube kubelet[611]: I0413 11:11:15.903656     611 topology_manager.go:233] [topologymanager] Topology Admit Handler
* Apr 13 11:11:15 minikube kubelet[611]: I0413 11:11:15.931780     611 kuberuntime_manager.go:978] updating runtime config through cri with podcidr 10.244.0.0/24
* Apr 13 11:11:15 minikube kubelet[611]: I0413 11:11:15.934287     611 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kindnet-token-96dct" (UniqueName: "kubernetes.io/secret/5bbbedf1-b2a5-47fe-a20b-5fd64abb613b-kindnet-token-96dct") pod "kindnet-pqj4q" (UID: "5bbbedf1-b2a5-47fe-a20b-5fd64abb613b")
* Apr 13 11:11:15 minikube kubelet[611]: I0413 11:11:15.934433     611 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "config-volume" (UniqueName: "kubernetes.io/configmap/54799116-7731-49fa-bb67-214d5ffc4556-config-volume") pod "coredns-66bff467f8-ccp2b" (UID: "54799116-7731-49fa-bb67-214d5ffc4556")
* Apr 13 11:11:15 minikube kubelet[611]: I0413 11:11:15.934545     611 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "xtables-lock" (UniqueName: "kubernetes.io/host-path/f5b18bb8-df16-44ec-8004-416737e6dbef-xtables-lock") pod "kube-proxy-d2m88" (UID: "f5b18bb8-df16-44ec-8004-416737e6dbef")
* Apr 13 11:11:15 minikube kubelet[611]: I0413 11:11:15.934673     611 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy-token-zfnvv" (UniqueName: "kubernetes.io/secret/f5b18bb8-df16-44ec-8004-416737e6dbef-kube-proxy-token-zfnvv") pod "kube-proxy-d2m88" (UID: "f5b18bb8-df16-44ec-8004-416737e6dbef")
* Apr 13 11:11:15 minikube kubelet[611]: I0413 11:11:15.934803     611 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "storage-provisioner-token-28nck" (UniqueName: "kubernetes.io/secret/46a78628-1e28-44fc-9e7d-84051ca0db39-storage-provisioner-token-28nck") pod "storage-provisioner" (UID: "46a78628-1e28-44fc-9e7d-84051ca0db39")
* Apr 13 11:11:15 minikube kubelet[611]: I0413 11:11:15.934910     611 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "coredns-token-wdt28" (UniqueName: "kubernetes.io/secret/6d0d11ed-593f-4a86-bd49-530c0933e8ee-coredns-token-wdt28") pod "coredns-66bff467f8-89fgq" (UID: "6d0d11ed-593f-4a86-bd49-530c0933e8ee")
* Apr 13 11:11:15 minikube kubelet[611]: I0413 11:11:15.935017     611 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "config-volume" (UniqueName: "kubernetes.io/configmap/6d0d11ed-593f-4a86-bd49-530c0933e8ee-config-volume") pod "coredns-66bff467f8-89fgq" (UID: "6d0d11ed-593f-4a86-bd49-530c0933e8ee")
* Apr 13 11:11:15 minikube kubelet[611]: I0413 11:11:15.935126     611 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "cni-cfg" (UniqueName: "kubernetes.io/host-path/5bbbedf1-b2a5-47fe-a20b-5fd64abb613b-cni-cfg") pod "kindnet-pqj4q" (UID: "5bbbedf1-b2a5-47fe-a20b-5fd64abb613b")
* Apr 13 11:11:15 minikube kubelet[611]: I0413 11:11:15.940550     611 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "xtables-lock" (UniqueName: "kubernetes.io/host-path/5bbbedf1-b2a5-47fe-a20b-5fd64abb613b-xtables-lock") pod "kindnet-pqj4q" (UID: "5bbbedf1-b2a5-47fe-a20b-5fd64abb613b")
* Apr 13 11:11:15 minikube kubelet[611]: I0413 11:11:15.940588     611 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/f5b18bb8-df16-44ec-8004-416737e6dbef-kube-proxy") pod "kube-proxy-d2m88" (UID: "f5b18bb8-df16-44ec-8004-416737e6dbef")
* Apr 13 11:11:15 minikube kubelet[611]: I0413 11:11:15.940612     611 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "coredns-token-wdt28" (UniqueName: "kubernetes.io/secret/54799116-7731-49fa-bb67-214d5ffc4556-coredns-token-wdt28") pod "coredns-66bff467f8-ccp2b" (UID: "54799116-7731-49fa-bb67-214d5ffc4556")
* Apr 13 11:11:15 minikube kubelet[611]: I0413 11:11:15.940651     611 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "lib-modules" (UniqueName: "kubernetes.io/host-path/5bbbedf1-b2a5-47fe-a20b-5fd64abb613b-lib-modules") pod "kindnet-pqj4q" (UID: "5bbbedf1-b2a5-47fe-a20b-5fd64abb613b")
* Apr 13 11:11:15 minikube kubelet[611]: I0413 11:11:15.940671     611 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "tmp" (UniqueName: "kubernetes.io/host-path/46a78628-1e28-44fc-9e7d-84051ca0db39-tmp") pod "storage-provisioner" (UID: "46a78628-1e28-44fc-9e7d-84051ca0db39")
* Apr 13 11:11:15 minikube kubelet[611]: I0413 11:11:15.940720     611 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "lib-modules" (UniqueName: "kubernetes.io/host-path/f5b18bb8-df16-44ec-8004-416737e6dbef-lib-modules") pod "kube-proxy-d2m88" (UID: "f5b18bb8-df16-44ec-8004-416737e6dbef")
* Apr 13 11:11:15 minikube kubelet[611]: I0413 11:11:15.940915     611 docker_service.go:353] docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}
* Apr 13 11:11:15 minikube kubelet[611]: I0413 11:11:15.942439     611 kubelet_network.go:77] Setting Pod CIDR:  -> 10.244.0.0/24
* Apr 13 11:11:16 minikube kubelet[611]: I0413 11:11:16.041160     611 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kubernetes-dashboard-token-ht4ws" (UniqueName: "kubernetes.io/secret/3c27f860-ab5f-4566-963d-c7ed83f38837-kubernetes-dashboard-token-ht4ws") pod "dashboard-metrics-scraper-84bfdf55ff-hxlsv" (UID: "3c27f860-ab5f-4566-963d-c7ed83f38837")
* Apr 13 11:11:16 minikube kubelet[611]: I0413 11:11:16.041272     611 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "tmp-volume" (UniqueName: "kubernetes.io/empty-dir/e5279a02-3819-4d8f-8945-dfb024c4f49b-tmp-volume") pod "kubernetes-dashboard-bc446cc64-mvm27" (UID: "e5279a02-3819-4d8f-8945-dfb024c4f49b")
* Apr 13 11:11:16 minikube kubelet[611]: I0413 11:11:16.041296     611 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kubernetes-dashboard-token-ht4ws" (UniqueName: "kubernetes.io/secret/e5279a02-3819-4d8f-8945-dfb024c4f49b-kubernetes-dashboard-token-ht4ws") pod "kubernetes-dashboard-bc446cc64-mvm27" (UID: "e5279a02-3819-4d8f-8945-dfb024c4f49b")
* Apr 13 11:11:16 minikube kubelet[611]: I0413 11:11:16.041375     611 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "tmp-volume" (UniqueName: "kubernetes.io/empty-dir/3c27f860-ab5f-4566-963d-c7ed83f38837-tmp-volume") pod "dashboard-metrics-scraper-84bfdf55ff-hxlsv" (UID: "3c27f860-ab5f-4566-963d-c7ed83f38837")
* Apr 13 11:11:16 minikube kubelet[611]: I0413 11:11:16.041399     611 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-sjtrf" (UniqueName: "kubernetes.io/secret/fc19142b-14d6-4390-9eb5-d06508ffd5f9-default-token-sjtrf") pod "hello-node-677b9cfc6b-w8vq4" (UID: "fc19142b-14d6-4390-9eb5-d06508ffd5f9")
* Apr 13 11:11:16 minikube kubelet[611]: I0413 11:11:16.041412     611 reconciler.go:157] Reconciler: start to sync state
* Apr 13 11:11:16 minikube kubelet[611]: I0413 11:11:16.047116     611 kubelet_node_status.go:112] Node minikube was previously registered
* Apr 13 11:11:16 minikube kubelet[611]: I0413 11:11:16.047276     611 kubelet_node_status.go:73] Successfully registered node minikube
* Apr 13 11:11:17 minikube kubelet[611]: W0413 11:11:17.036347     611 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-66bff467f8-89fgq through plugin: invalid network status for
* Apr 13 11:11:17 minikube kubelet[611]: W0413 11:11:17.040917     611 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-66bff467f8-89fgq through plugin: invalid network status for
* Apr 13 11:11:17 minikube kubelet[611]: W0413 11:11:17.052802     611 pod_container_deletor.go:77] Container "946cad92987af00baa7e67c2957c1a9099e8f37c33aa646ddb04b373a0f5a7f3" not found in pod's containers
* Apr 13 11:11:17 minikube kubelet[611]: W0413 11:11:17.317429     611 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-66bff467f8-ccp2b through plugin: invalid network status for
* Apr 13 11:11:17 minikube kubelet[611]: W0413 11:11:17.772486     611 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for default/hello-node-677b9cfc6b-w8vq4 through plugin: invalid network status for
* Apr 13 11:11:17 minikube kubelet[611]: W0413 11:11:17.788122     611 pod_container_deletor.go:77] Container "ce0ad4a571ad59f780a9c23dbe4999ec7d6297e972b7bcac7fd46e58a876a12a" not found in pod's containers
* Apr 13 11:11:17 minikube kubelet[611]: W0413 11:11:17.841907     611 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/kubernetes-dashboard-bc446cc64-mvm27 through plugin: invalid network status for
* Apr 13 11:11:18 minikube kubelet[611]: W0413 11:11:18.267565     611 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/dashboard-metrics-scraper-84bfdf55ff-hxlsv through plugin: invalid network status for
* Apr 13 11:11:18 minikube kubelet[611]: W0413 11:11:18.274315     611 pod_container_deletor.go:77] Container "bf4599675ea68e954035ca935218f6e9d53d5714b7f2f36459bf61ab305bad96" not found in pod's containers
* Apr 13 11:11:18 minikube kubelet[611]: W0413 11:11:18.324530     611 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-66bff467f8-ccp2b through plugin: invalid network status for
* Apr 13 11:11:18 minikube kubelet[611]: W0413 11:11:18.334399     611 pod_container_deletor.go:77] Container "78bde8e07f0e5159a38f717e1bbda9c9cd268fc303ee921db27a7645ba1530ad" not found in pod's containers
* Apr 13 11:11:18 minikube kubelet[611]: W0413 11:11:18.352392     611 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/kubernetes-dashboard-bc446cc64-mvm27 through plugin: invalid network status for
* Apr 13 11:11:18 minikube kubelet[611]: E0413 11:11:18.362631     611 summary_sys_containers.go:47] Failed to get system container stats for "/kubepods": failed to get cgroup stats for "/kubepods": failed to get container info for "/kubepods": unknown container "/kubepods"
* Apr 13 11:11:18 minikube kubelet[611]: E0413 11:11:18.363384     611 helpers.go:680] eviction manager: failed to construct signal: "allocatableMemory.available" error: system container "pods" not found in metrics
* Apr 13 11:11:18 minikube kubelet[611]: W0413 11:11:18.388523     611 pod_container_deletor.go:77] Container "db4fb651f3231dbf2e77b715b3d5563d7916042b5f05867e50b545ec60c41e9b" not found in pod's containers
* Apr 13 11:11:19 minikube kubelet[611]: W0413 11:11:19.404383     611 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for default/hello-node-677b9cfc6b-w8vq4 through plugin: invalid network status for
* Apr 13 11:11:19 minikube kubelet[611]: W0413 11:11:19.414544     611 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/dashboard-metrics-scraper-84bfdf55ff-hxlsv through plugin: invalid network status for
* Apr 13 11:11:19 minikube kubelet[611]: W0413 11:11:19.422003     611 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/kubernetes-dashboard-bc446cc64-mvm27 through plugin: invalid network status for
* Apr 13 11:11:19 minikube kubelet[611]: W0413 11:11:19.437199     611 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-66bff467f8-89fgq through plugin: invalid network status for
* Apr 13 11:11:19 minikube kubelet[611]: W0413 11:11:19.459147     611 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-66bff467f8-ccp2b through plugin: invalid network status for
* Apr 13 11:11:28 minikube kubelet[611]: E0413 11:11:28.380105     611 summary_sys_containers.go:47] Failed to get system container stats for "/kubepods": failed to get cgroup stats for "/kubepods": failed to get container info for "/kubepods": unknown container "/kubepods"
* Apr 13 11:11:28 minikube kubelet[611]: E0413 11:11:28.380159     611 helpers.go:680] eviction manager: failed to construct signal: "allocatableMemory.available" error: system container "pods" not found in metrics
* Apr 13 11:11:38 minikube kubelet[611]: E0413 11:11:38.391266     611 summary_sys_containers.go:47] Failed to get system container stats for "/kubepods": failed to get cgroup stats for "/kubepods": failed to get container info for "/kubepods": unknown container "/kubepods"
* Apr 13 11:11:38 minikube kubelet[611]: E0413 11:11:38.391314     611 helpers.go:680] eviction manager: failed to construct signal: "allocatableMemory.available" error: system container "pods" not found in metrics
* Apr 13 11:11:48 minikube kubelet[611]: E0413 11:11:48.403879     611 summary_sys_containers.go:47] Failed to get system container stats for "/kubepods": failed to get cgroup stats for "/kubepods": failed to get container info for "/kubepods": unknown container "/kubepods"
* Apr 13 11:11:48 minikube kubelet[611]: E0413 11:11:48.403924     611 helpers.go:680] eviction manager: failed to construct signal: "allocatableMemory.available" error: system container "pods" not found in metrics
* Apr 13 11:11:58 minikube kubelet[611]: E0413 11:11:58.418748     611 summary_sys_containers.go:47] Failed to get system container stats for "/kubepods": failed to get cgroup stats for "/kubepods": failed to get container info for "/kubepods": unknown container "/kubepods"
* Apr 13 11:11:58 minikube kubelet[611]: E0413 11:11:58.418791     611 helpers.go:680] eviction manager: failed to construct signal: "allocatableMemory.available" error: system container "pods" not found in metrics
* 
* ==> kubernetes-dashboard [89618e54852f] <==
* 2020/04/13 00:43:05 Using namespace: kubernetes-dashboard
* 2020/04/13 00:43:05 Using in-cluster config to connect to apiserver
* 2020/04/13 00:43:05 Using secret token for csrf signing
* 2020/04/13 00:43:05 Initializing csrf token from kubernetes-dashboard-csrf secret
* 2020/04/13 00:43:05 Successful initial request to the apiserver, version: v1.18.0
* 2020/04/13 00:43:05 Generating JWE encryption key
* 2020/04/13 00:43:05 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
* 2020/04/13 00:43:05 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
* 2020/04/13 00:43:05 Initializing JWE encryption key from synchronized object
* 2020/04/13 00:43:05 Creating in-cluster Sidecar client
* 2020/04/13 00:43:06 Metric client health check failed: Get https://10.96.0.1:443/api/v1/namespaces/kubernetes-dashboard/services/dashboard-metrics-scraper/proxy/healthz: stream error: stream ID 7; INTERNAL_ERROR. Retrying in 30 seconds.
* 2020/04/13 00:43:06 Serving insecurely on HTTP port: 9090
* 2020/04/13 00:43:36 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
* 2020/04/13 00:44:06 Successful request to sidecar
* 
* ==> kubernetes-dashboard [8d06583915d3] <==
* 2020/04/13 11:11:18 Starting overwatch
* 2020/04/13 11:11:18 Using namespace: kubernetes-dashboard
* 2020/04/13 11:11:18 Using in-cluster config to connect to apiserver
* 2020/04/13 11:11:18 Using secret token for csrf signing
* 2020/04/13 11:11:18 Initializing csrf token from kubernetes-dashboard-csrf secret
* 2020/04/13 11:11:18 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
* 2020/04/13 11:11:18 Successful initial request to the apiserver, version: v1.18.0
* 2020/04/13 11:11:18 Generating JWE encryption key
* 2020/04/13 11:11:18 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
* 2020/04/13 11:11:18 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
* 2020/04/13 11:11:18 Initializing JWE encryption key from synchronized object
* 2020/04/13 11:11:18 Creating in-cluster Sidecar client
* 2020/04/13 11:11:18 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
* 2020/04/13 11:11:18 Serving insecurely on HTTP port: 9090
* 2020/04/13 11:11:48 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
* 2020/04/13 11:12:18 Successful request to sidecar
* 
* ==> storage-provisioner [d9aeadf8a2e9] <==
* 
* ==> storage-provisioner [fa227d6f5c3f] <==

@priyawadhwa priyawadhwa added kind/bug Categorizes issue or PR as related to a bug. co/service issues related to the service feature os/windows labels Apr 15, 2020
@priyawadhwa

Hey @ps-feng, thanks for opening this issue.

cc @medyagh, is this a known issue with kic?

@medyagh
Member

medyagh commented Apr 16, 2020

@ps-feng one of the differences with the Docker driver on Windows is that you have to keep your terminal window open while you open the URL in the browser (unlike other drivers, where you can just hit the URL directly).

Do you mind trying to keep the terminal open and hitting the URL, and telling me if you still have the same issue?

@medyagh medyagh added co/docker-driver Issues related to kubernetes in container triage/needs-information Indicates an issue needs more information in order to work on it. kind/support Categorizes issue or PR as a support question. labels Apr 16, 2020
@medyagh
Member

medyagh commented Apr 16, 2020

@ps-feng if you verify those hints helped, we could add this to the documentation: accessing a service works differently on Windows and macOS, but only with the Docker driver, and you need to keep your terminal open (for an SSH tunnel that we have to create).
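
For context, the tunnel is conceptually an SSH local port forward from the host into the minikube container. A rough sketch (all values hypothetical; the container's mapped SSH port, the key path, and the service NodePort differ per machine and appear in the --alsologtostderr output):

ssh -i %USERPROFILE%\.minikube\machines\minikube\id_rsa -p 32773 -N -L 8080:localhost:31376 docker@127.0.0.1

Here -N opens no remote shell and -L forwards a local port to the service's NodePort inside the container; closing the terminal kills the forward, which is why the window has to stay open.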

@ps-feng
Author

ps-feng commented Apr 16, 2020

@medyagh thanks for replying. I just tested it, and the minikube service hello-node command exits right after printing * Opening service default/hello-node in default browser..., so it's not working.

I've noticed the behavior you describe with minikube dashboard; was this command supposed to work that way too?

Tested on both CMD and PowerShell.

@ps-feng
Author

ps-feng commented Apr 16, 2020

Also, minikube tunnel doesn't work either, if that helps:

$ minikube tunnel
Status:
        machine: minikube
        pid: 32
        route: 10.96.0.0/12 -> 172.17.0.5
        minikube: Running
        services: []
    errors:
                minikube: no errors
                router: error adding route:  Correcto
, 2
                loadbalancer emulator: no errors
Status:
        machine: minikube
        pid: 32
        route: 10.96.0.0/12 -> 172.17.0.5
        minikube: Running
        services: []
    errors:
                minikube: no errors
                router: error adding route: Error en la adición de la ruta: El objeto ya existe.

, 3
                loadbalancer emulator: no errors

The error says "Error adding route: The object already exists".
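
If anyone else hits this, a stale route left behind by a previous tunnel can usually be inspected and removed from an elevated prompt before retrying (a sketch, assuming the 10.96.0.0/12 service CIDR shown in the status above):

route print 10.96.*
route delete 10.96.0.0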

@medyagh
Member

medyagh commented Apr 17, 2020

@ps-feng thanks for providing more details. Could you please provide the version information and also the full output of the commands, with these flags added:

minikube service hello-node --alsologtostderr -v=8

This sounds like a bug!

@medyagh
Member

medyagh commented Apr 17, 2020

@ps-feng it seems like you found a bug!

I believe in this code we were supposed to include Windows but missed it:

		if runtime.GOOS == "darwin" && co.Config.Driver == oci.Docker {
			startKicServiceTunnel(svc, cname)
			return
		}
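
A minimal sketch of the likely fix (hypothetical; the actual change landed in the PR referenced later in this thread) would be to extend that condition to cover Windows as well:

		// hypothetical sketch: start the tunnel on Windows too, not just macOS
		if co.Config.Driver == oci.Docker && (runtime.GOOS == "darwin" || runtime.GOOS == "windows") {
			startKicServiceTunnel(svc, cname)
			return
		}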

@medyagh
Member

medyagh commented Apr 17, 2020

@ps-feng I am going to make a PR. Would you be willing to try it to see if it works for you? I can provide you the binary file to download.

@medyagh
Member

medyagh commented Apr 17, 2020

@ps-feng thank you again for finding this bug and actually taking the time to file an issue! I bet many other people hit this but never filed it.

Would you kindly try the binary from this PR to see if that solves the problem?

#7739
Here is the link to download:
http://storage.googleapis.com/minikube-builds/7739/minikube-windows-amd64.exe

@medyagh medyagh removed triage/needs-information Indicates an issue needs more information in order to work on it. kind/support Categorizes issue or PR as a support question. labels Apr 17, 2020
@medyagh medyagh added this to the v1.10.0 milestone Apr 17, 2020
@Nickoriuk

I was facing this issue as well, and build 7739 linked above now allows me to access the target URL in my host browser. I do, however, get an error when closing the tunnel via Ctrl+C; I'm unsure if this is related or belongs in a new ticket. CLI output below.

CLI output
> .\minikube1.exe delete
! "minikube" profile does not exist, trying anyways.
* Removed all traces of the "minikube" cluster.
> .\minikube1.exe start
* minikube v1.9.2 on Microsoft Windows 10 Pro 10.0.18363 Build 18363
* Automatically selected the docker driver
* Starting control plane node minikube in cluster minikube
* Creating docker container (CPUs=2, Memory=1989MB) ...
* Preparing Kubernetes v1.18.0 on Docker 19.03.2 ...
  - kubeadm.pod-network-cidr=10.244.0.0/16
* Enabling addons: default-storageclass, storage-provisioner
* Done! kubectl is now configured to use "minikube"
>  kubectl create deployment hello-minikube --image=k8s.gcr.io/echoserver:1.10
deployment.apps/hello-minikube created
> kubectl expose deployment hello-minikube --type=NodePort  --port=8080
service/hello-minikube exposed
> .\minikube1.exe service hello-minikube --alsologtostderr
I0417 07:45:18.245756   19960 mustload.go:64] Loading cluster: minikube
I0417 07:45:18.246753   19960 oci.go:268] executing with [docker inspect -f {{.State.Status}} minikube] timeout: 19s
I0417 07:45:18.309753   19960 host.go:65] Checking if "minikube" exists ...
I0417 07:45:18.371035   19960 api_server.go:144] Checking apiserver status ...
I0417 07:45:18.378036   19960 ssh_runner.go:148] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0417 07:45:18.439043   19960 sshutil.go:44] new ssh client: &{IP:127.0.0.1 Port:32773 SSHKeyPath:C:\Users\myuser\.minikube\machines\minikube\id_rsa Username:docker}
I0417 07:45:18.559921   19960 ssh_runner.go:148] Run: sudo egrep ^[0-9]+:freezer: /proc/1434/cgroup
I0417 07:45:18.573922   19960 api_server.go:160] apiserver freezer: "7:freezer:/docker/05cc78923e634cbc6fa3212eb82a626393734c7db48e5a0582da9bdd425fdfd1/kubepods/burstable/pod112c60df9e36eeaf13a6dd3074765810/6f84cff19f884fa3a0370398c831280617feb05ed6785cf76d4cd214fe34f6c4"
I0417 07:45:18.582421   19960 ssh_runner.go:148] Run: sudo cat /sys/fs/cgroup/freezer/docker/05cc78923e634cbc6fa3212eb82a626393734c7db48e5a0582da9bdd425fdfd1/kubepods/burstable/pod112c60df9e36eeaf13a6dd3074765810/6f84cff19f884fa3a0370398c831280617feb05ed6785cf76d4cd214fe34f6c4/freezer.state
I0417 07:45:18.589921   19960 api_server.go:174] freezer state: "THAWED"
I0417 07:45:18.589921   19960 api_server.go:184] Checking apiserver healthz at https://127.0.0.1:32771/healthz ...
* Starting tunnel for service hello-minikube.
|-----------|----------------|-------------|------------------------|
| NAMESPACE |      NAME      | TARGET PORT |          URL           |
|-----------|----------------|-------------|------------------------|
| default   | hello-minikube |             | http://127.0.0.1:64553 |
|-----------|----------------|-------------|------------------------|
* Opening service default/hello-minikube in default browser...
! Because you are using a Docker driver on windows, the terminal needs to be open to run it.
* Stopping tunnel for service hello-minikube.
I0417 07:45:34.130858   19960 exit.go:58] WithError(error stopping tunnel)=stopping ssh tunnel: TerminateProcess: Access is denied. called from:
goroutine 1 [running]:
runtime/debug.Stack(0x20, 0x198dbe0, 0x1)
        /usr/local/go/src/runtime/debug/stack.go:24 +0xa4
k8s.io/minikube/pkg/minikube/exit.WithError(0x1b2969b, 0x15, 0x1ddeaa0, 0xc000004ea0)
        /app/pkg/minikube/exit/exit.go:58 +0x3b
k8s.io/minikube/cmd/minikube/cmd.startKicServiceTunnel(0xc00003c0d0, 0xe, 0x1b1238a, 0x8)
        /app/cmd/minikube/cmd/service.go:147 +0x630
k8s.io/minikube/cmd/minikube/cmd.glob..func21(0x2b2b040, 0xc000117620, 0x1, 0x2)
        /app/cmd/minikube/cmd/service.go:83 +0x4d9
github.com/spf13/cobra.(*Command).execute(0x2b2b040, 0xc0001175c0, 0x2, 0x2, 0x2b2b040, 0xc0001175c0)
        /go/pkg/mod/github.com/spf13/cobra@v0.0.5/command.go:830 +0x2b1
github.com/spf13/cobra.(*Command).ExecuteC(0x2b2ab40, 0x0, 0x0, 0xc000117101)
        /go/pkg/mod/github.com/spf13/cobra@v0.0.5/command.go:914 +0x302
github.com/spf13/cobra.(*Command).Execute(...)
        /go/pkg/mod/github.com/spf13/cobra@v0.0.5/command.go:864
k8s.io/minikube/cmd/minikube/cmd.Execute()
        /app/cmd/minikube/cmd/root.go:108 +0x64f
main.main()
        /app/cmd/minikube/main.go:66 +0xf1
W0417 07:45:34.131858   19960 out.go:201] error stopping tunnel: stopping ssh tunnel: TerminateProcess: Access is denied.
*
X error stopping tunnel: stopping ssh tunnel: TerminateProcess: Access is denied.
*
* minikube is exiting due to an error. If the above message is not useful, open an issue:
  - https://github.com/kubernetes/minikube/issues/new/choose

@ps-feng
Author

ps-feng commented Apr 17, 2020

@medyagh The build you've provided works for me! Both minikube service <service> and minikube tunnel work. I don't get the error @Nickoriuk reported.

@medyagh
Member

medyagh commented Apr 17, 2020

@medyagh The build you've provided works for me! Both minikube service <service> and minikube tunnel work. I don't get the error @Nickoriuk reported.

Thank you very much for verifying; it will be included in minikube v1.10.0.
I will make sure to mention your username in our release notes for reporting this.

@medyagh
Member

medyagh commented Apr 17, 2020

I was facing this issue as well, and build 7739 linked above now allows me to access the target URL in my host browser. I do, however, get an error when closing the tunnel via Ctrl+C; I'm unsure if this is related or belongs in a new ticket. CLI output below.

CLI output

@Nickoriuk do you mind retrying it? And are you running it in an admin PowerShell or a regular user PowerShell?

@medyagh medyagh added the priority/important-soon Must be staffed and worked on either currently, or very soon, ideally in time for the next release. label Apr 17, 2020
@Nickoriuk

I was facing this issue as well, and build 7739 linked above now allows me to access the target URL in my host browser. I do, however, get an error when closing the tunnel via Ctrl+C; I'm unsure if this is related or belongs in a new ticket. CLI output below.
CLI output

@Nickoriuk do you mind retrying it? And are you running it in an admin PowerShell or a regular user PowerShell?

I was using an admin PowerShell. I've tried again and the error no longer occurs.

@medyagh medyagh self-assigned this Apr 20, 2020
@medyagh
Member

medyagh commented Apr 20, 2020

this issue was fixed by #7739

@medyagh medyagh closed this as completed Apr 20, 2020
@tarkesh2shar

Hey, the binary file doesn't install anything; it just opens a cmd prompt and exits. Did anyone else get the same issue?
http://storage.googleapis.com/minikube-builds/7739/minikube-windows-amd64.exe
Can someone provide a new link, or update to the latest version?

@karishma6401

Hey, I'm getting this issue when running minikube service spring-boot-docker-k8s --url; the URL is not accessible. I am using minikube v1.25.2.
Output of minikube service spring-boot-docker-k8s --url:
http://192.168.49.2:30047

  • Starting tunnel for service spring-boot-docker-k8s.
    ! Because you are using a Docker driver on windows, the terminal needs to be open to run it.

@rainjacy

rainjacy commented Jun 9, 2022

I have the same issue; I am using minikube v1.25.2.

@santiagortiiz

santiagortiiz commented Jun 11, 2022

Same error following the minikube tutorial on Windows, using:

minikube service hello-minikube

  • Starting tunnel for service hello-minikube.
  • Opening service default/hello-minikube in default browser...
    ! Because you are using a Docker driver on windows, the terminal needs to be open to run it.

But this alternative works well:

kubectl port-forward service/hello-minikube 7080:8080

CLIENT VALUES:
client_address=127.0.0.1
command=GET
real path=/
query=nil
request_version=1.1
request_uri=http://localhost:8080/

SERVER VALUES:
server_version=nginx: 1.10.0 - lua: 10001

HEADERS RECEIVED:
accept=text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,/;q=0.8,application/signed-exchange;v=b3;q=0.9
accept-encoding=gzip, deflate, br
accept-language=es-US,es-419;q=0.9,es;q=0.8,en;q=0.7
connection=keep-alive
host=localhost:7080
sec-ch-ua=" Not A;Brand";v="99", "Chromium";v="102", "Google Chrome";v="102"
sec-ch-ua-mobile=?0
sec-ch-ua-platform="Windows"
sec-fetch-dest=document
sec-fetch-mode=navigate
sec-fetch-site=cross-site
sec-fetch-user=?1
upgrade-insecure-requests=1
user-agent=Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/102.0.0.0 Safari/537.36
BODY:
-no body in request-
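
For anyone adapting this, the general pattern is (substitute your own service name and ports):

kubectl port-forward service/<service-name> <local-port>:<service-port>

The forward binds to 127.0.0.1 and runs in the foreground, so this terminal also needs to stay open.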

@pawelslowik

pawelslowik commented Jun 19, 2022

Same issue on my Windows; the port-forward suggested by @santiagortiiz helped, thanks :)

@Hhhrui

Hhhrui commented Oct 9, 2024

Same issue on my Windows. I'm using minikube v1.23.9; port-forward does not work for accessing the minikube service.

@rishabhdomadiya

rishabhdomadiya commented Feb 17, 2025

Faced the same issue on Windows, but port-forwarding worked for me. Thanks @santiagortiiz!
