
1.0.0 hyperv: kube-proxy: timed out waiting for the condition #4034

Closed
jerrytang67 opened this issue Apr 2, 2019 · 3 comments
Labels
co/hyperv HyperV related issues co/kube-proxy issues relating to kube-proxy in some way priority/awaiting-more-evidence Lowest priority. Possibly useful, but not yet enough support to actually get it done.

Comments

@jerrytang67

These are the errors I get when running:
minikube start --vm-driver=hyperv --hyperv-virtual-switch=k8s --alsologtostderr

I0402 13:26:41.385275   29436 utils.go:240] > INFO: Leader is minikube
I0402 13:26:41.385275   29436 utils.go:240] > INFO: == Kubernetes addon ensure completed at 2019-04-02T05:26:40+00:00 ==
I0402 13:26:41.385275   29436 utils.go:240] > INFO: == Reconciling with deprecated label ==
I0402 13:26:41.385275   29436 utils.go:240] > INFO: == Reconciling with addon-manager label ==
W0402 13:26:41.386274   29436 logs.go:85] Found kube-addon-manager problem: error: unable to recognize "STDIN": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
W0402 13:26:41.386274   29436 logs.go:85] Found kube-addon-manager problem: error: no objects passed to apply
W0402 13:26:41.386274   29436 logs.go:85] Found kube-addon-manager problem: error: no objects passed to apply
W0402 13:26:41.387272   29436 logs.go:85] Found kube-addon-manager problem: error: no objects passed to apply
W0402 13:26:41.387272   29436 logs.go:85] Found kube-addon-manager problem: error: no objects passed to apply
W0402 13:26:41.387272   29436 logs.go:85] Found kube-addon-manager problem: error: no objects passed to apply
W0402 13:26:41.387272   29436 logs.go:85] Found kube-addon-manager problem: error: no objects passed to apply
W0402 13:26:41.388275   29436 logs.go:85] Found kube-addon-manager problem: error: no objects passed to apply
W0402 13:26:41.388275   29436 logs.go:85] Found kube-addon-manager problem: error: no objects passed to apply
W0402 13:26:41.388275   29436 logs.go:85] Found kube-addon-manager problem: error: no objects passed to apply
W0402 13:26:41.388275   29436 logs.go:85] Found kube-addon-manager problem: error: no objects passed to apply
W0402 13:26:41.389273   29436 logs.go:85] Found kube-addon-manager problem: error: no objects passed to apply
I0402 13:26:41.389273   29436 logs.go:73] Gathering logs for kubernetes-dashboard ...
I0402 13:26:41.389273   29436 ssh_runner.go:137] Run with output: docker logs --tail 200 84994a34a944
I0402 13:26:41.443319   29436 utils.go:240] > 2019/04/02 05:24:45 Using in-cluster config to connect to apiserver
I0402 13:26:41.443319   29436 utils.go:240] ! 2019/04/02 05:24:45 Starting overwatch
I0402 13:26:41.443319   29436 utils.go:240] > 2019/04/02 05:24:45 Using service account token for csrf signing
I0402 13:26:41.444317   29436 utils.go:240] > 2019/04/02 05:25:15 Error while initializing connection to Kubernetes apiserver. This most likely means that the cluster is misconfigured (e.g., it has invalid apiserver certificates or service account's configuration) or the --apiserver-host param points to a server that does not exist. Reason: Get https://10.96.0.1:443/version: dial tcp 10.96.0.1:443: i/o timeout
I0402 13:26:41.444317   29436 utils.go:240] > Refer to our FAQ and wiki pages for more information: https://github.com/kubernetes/dashboard/wiki/FAQ
I0402 13:26:41.446319   29436 logs.go:73] Gathering logs for storage-provisioner ...
I0402 13:26:41.446319   29436 ssh_runner.go:137] Run with output: docker logs --tail 200 e5afaf361183
I0402 13:26:41.496339   29436 utils.go:240] ! F0402 05:25:28.236418       1 main.go:37] Error getting server version: Get https://10.96.0.1:443/version: dial tcp 10.96.0.1:443: i/o timeout
W0402 13:26:41.497317   29436 exit.go:99] Error restarting cluster: wait: waiting for k8s-app=kube-proxy: timed out waiting for the condition

!   Error restarting cluster: wait: waiting for k8s-app=kube-proxy: timed out waiting for the condition

*   Sorry that minikube crashed. If this was unexpected, we would love to hear from you:
-   https://github.com/kubernetes/minikube/issues/new
X   Problems detected in "kube-addon-manager":
    - error: unable to recognize "STDIN": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
    - error: no objects passed to apply
    - error: no objects passed to apply

At the same time:

λ kubectl get pods --all-namespaces

NAMESPACE     NAME                                    READY     STATUS             RESTARTS   AGE
kube-system   etcd-minikube                           1/1       Running            5          3h
kube-system   kube-addon-manager-minikube             1/1       Running            4          3h
kube-system   kube-apiserver-minikube                 1/1       Running            3          3h
kube-system   kube-controller-manager-minikube        1/1       Running            0          1h
kube-system   kube-scheduler-minikube                 1/1       Running            4          3h
kube-system   kubernetes-dashboard-79dd6bfc48-6xq7n   0/1       CrashLoopBackOff   44         3h
kube-system   storage-provisioner                     0/1       CrashLoopBackOff   44         3h

λ kubectl get service --all-namespaces

NAMESPACE     NAME                   TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE
default       kubernetes             ClusterIP   10.96.0.1        <none>        443/TCP   3h
kube-system   kubernetes-dashboard   ClusterIP   10.107.146.193   <none>        80/TCP    3h

ifconfig

$ ifconfig
docker0   Link encap:Ethernet  HWaddr 02:42:2D:31:66:55  
          inet addr:172.17.0.1  Bcast:172.17.255.255  Mask:255.255.0.0
          inet6 addr: fe80::42:2dff:fe31:6655/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:143 errors:0 dropped:0 overruns:0 frame:0
          TX packets:54 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:7708 (7.5 KiB)  TX bytes:3232 (3.1 KiB)

eth0      Link encap:Ethernet  HWaddr 00:15:5D:80:D3:15  
          inet addr:192.168.31.108  Bcast:192.168.31.255  Mask:255.255.255.0
          inet6 addr: fe80::215:5dff:fe80:d315/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:305886 errors:0 dropped:0 overruns:0 frame:0
          TX packets:26793 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:352218901 (335.9 MiB)  TX bytes:3449760 (3.2 MiB)

lo        Link encap:Local Loopback  
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:683793 errors:0 dropped:0 overruns:0 frame:0
          TX packets:683793 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:155821194 (148.6 MiB)  TX bytes:155821194 (148.6 MiB)

vethf3a0b57 Link encap:Ethernet  HWaddr 6E:10:F8:89:4A:A5  
          inet6 addr: fe80::6c10:f8ff:fe89:4aa5/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:143 errors:0 dropped:0 overruns:0 frame:0
          TX packets:75 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:9710 (9.4 KiB)  TX bytes:4912 (4.7 KiB)

docker images -a

$ docker images -a
REPOSITORY                                TAG                 IMAGE ID            CREATED             SIZE
k8s.gcr.io/kube-proxy                     v1.14.0             5cd54e388aba        7 days ago          82.1MB
k8s.gcr.io/kube-apiserver                 v1.14.0             ecf910f40d6e        7 days ago          210MB
k8s.gcr.io/kube-scheduler                 v1.14.0             00638a24688b        7 days ago          81.6MB
k8s.gcr.io/kube-controller-manager        v1.14.0             b95b1efa0436        7 days ago          158MB
k8s.gcr.io/kube-addon-manager             v9.0                119701e77cbc        2 months ago        83.1MB
k8s.gcr.io/coredns                        1.3.1               eb516548c180        2 months ago        40.3MB
k8s.gcr.io/kubernetes-dashboard-amd64     v1.10.1             f9aed6605b81        3 months ago        122MB
k8s.gcr.io/etcd                           3.3.10              2c4adeb21b4f        4 months ago        258MB
k8s.gcr.io/pause                          3.1                 da86e6ba6ca1        15 months ago       742kB
gcr.io/k8s-minikube/storage-provisioner   v1.8.1              4689081edb10        17 months ago       80.8MB

docker ps -a

$ docker ps -a
CONTAINER ID        IMAGE                  COMMAND                  CREATED             STATUS                     PORTS               NAMES
7b339d635e54        f9aed6605b81           "/dashboard --insecu…"   4 minutes ago       Exited (1) 3 minutes ago                       k8s_kubernetes-dashboard_kubernetes-dashboard-79dd6bfc48-6xq7n_kube-system_cc04be5e-54fa-11e9-9d33-00155d80d315_44
d3a3b9b71e51        4689081edb10           "/storage-provisioner"   4 minutes ago       Exited (1) 3 minutes ago                       k8s_storage-provisioner_storage-provisioner_kube-system_3d2ec633-54fa-11e9-9d33-00155d80d315_44
8611c5c49805        k8s.gcr.io/pause:3.1   "/pause"                 2 hours ago         Up 2 hours                                     k8s_POD_kubernetes-dashboard-79dd6bfc48-6xq7n_kube-system_cc04be5e-54fa-11e9-9d33-00155d80d315_3
d350a932205d        k8s.gcr.io/pause:3.1   "/pause"                 2 hours ago         Up 2 hours                                     k8s_POD_storage-provisioner_kube-system_3d2ec633-54fa-11e9-9d33-00155d80d315_4
1a6ebcc7dc43        00638a24688b           "kube-scheduler --bi…"   2 hours ago         Up 2 hours                                     k8s_kube-scheduler_kube-scheduler-minikube_kube-system_58272442e226c838b193bbba4c44091e_4
254e26995131        b95b1efa0436           "kube-controller-man…"   2 hours ago         Up 2 hours                                     k8s_kube-controller-manager_kube-controller-manager-minikube_kube-system_2899d819dcdb72186fb15d30a0cc5a71_0
35e7c989430f        ecf910f40d6e           "kube-apiserver --ad…"   2 hours ago         Up 2 hours                                     k8s_kube-apiserver_kube-apiserver-minikube_kube-system_1a21583fe983242004a09eb2c3665b91_3
3e28e7121156        2c4adeb21b4f           "etcd --advertise-cl…"   2 hours ago         Up 2 hours                                     k8s_etcd_etcd-minikube_kube-system_8bd964a115149438a62d022bdef5f109_5
cd548475fd7e        119701e77cbc           "/opt/kube-addons.sh"    2 hours ago         Up 2 hours                                     k8s_kube-addon-manager_kube-addon-manager-minikube_kube-system_0abcb7a1f0c9c0ebc9ec348ffdfb220c_4
8a5084e11645        k8s.gcr.io/pause:3.1   "/pause"                 2 hours ago         Up 2 hours                                     k8s_POD_kube-scheduler-minikube_kube-system_58272442e226c838b193bbba4c44091e_5
79345529c8a8        k8s.gcr.io/pause:3.1   "/pause"                 2 hours ago         Up 2 hours                                     k8s_POD_kube-controller-manager-minikube_kube-system_2899d819dcdb72186fb15d30a0cc5a71_0
067e135faf6f        k8s.gcr.io/pause:3.1   "/pause"                 2 hours ago         Up 2 hours                                     k8s_POD_kube-apiserver-minikube_kube-system_1a21583fe983242004a09eb2c3665b91_3
bff03fc1c516        k8s.gcr.io/pause:3.1   "/pause"                 2 hours ago         Up 2 hours                                     k8s_POD_etcd-minikube_kube-system_8bd964a115149438a62d022bdef5f109_6
fb4210eac3ca        k8s.gcr.io/pause:3.1   "/pause"                 2 hours ago         Up 2 hours                                     k8s_POD_kube-addon-manager-minikube_kube-system_0abcb7a1f0c9c0ebc9ec348ffdfb220c_7
bc2972ff7b08        119701e77cbc           "/opt/kube-addons.sh"    2 hours ago         Exited (137) 2 hours ago                       k8s_kube-addon-manager_kube-addon-manager-minikube_kube-system_0abcb7a1f0c9c0ebc9ec348ffdfb220c_3
2e033f4010e1        k8s.gcr.io/pause:3.1   "/pause"                 2 hours ago         Exited (0) 2 hours ago                         k8s_POD_kube-addon-manager-minikube_kube-system_0abcb7a1f0c9c0ebc9ec348ffdfb220c_6
31f95ee92223        00638a24688b           "kube-scheduler --bi…"   2 hours ago         Exited (2) 2 hours ago                         k8s_kube-scheduler_kube-scheduler-minikube_kube-system_58272442e226c838b193bbba4c44091e_3
8dc9daa33ca3        ecf910f40d6e           "kube-apiserver --ad…"   2 hours ago         Exited (0) 2 hours ago                         k8s_kube-apiserver_kube-apiserver-minikube_kube-system_1a21583fe983242004a09eb2c3665b91_2
ecc97a6d00e3        k8s.gcr.io/pause:3.1   "/pause"                 2 hours ago         Exited (0) 2 hours ago                         k8s_POD_kube-apiserver-minikube_kube-system_1a21583fe983242004a09eb2c3665b91_2
df706a933648        2c4adeb21b4f           "etcd --advertise-cl…"   2 hours ago         Exited (0) 2 hours ago                         k8s_etcd_etcd-minikube_kube-system_8bd964a115149438a62d022bdef5f109_4
38fb756dc525        k8s.gcr.io/pause:3.1   "/pause"                 2 hours ago         Exited (0) 2 hours ago                         k8s_POD_etcd-minikube_kube-system_8bd964a115149438a62d022bdef5f109_5
c09efc9eb25e        k8s.gcr.io/pause:3.1   "/pause"                 2 hours ago         Exited (0) 2 hours ago                         k8s_POD_kube-scheduler-minikube_kube-system_58272442e226c838b193bbba4c44091e_4

I tried to get the logs of the exited containers; they show:

F0402 07:06:14.210660       1 main.go:37] Error getting server version: Get https://10.96.0.1:443/version: dial tcp 10.96.0.1:443: i/o timeout

I have run minikube delete and deleted .kube and .minikube many times, but I always get these errors. It looks like the kube-proxy and kube-dns pods never come up. What can I do now?

@marcosdiez
Contributor

Hi! I had a similar problem and my solution is here: #4052
Since you are on Windows, my solution won't work 100% for you, but maybe you could try something similar: just create a resolv.conf file with nameserver 8.8.8.8 as its only entry and use that one.

Could you please try it yourself and comment whether the solution works for you as well? Thank you!
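For illustration, a minimal sketch of the workaround described above: write a resolv.conf whose only entry is Google's public DNS, then copy it into place inside the VM. The file path `/tmp/resolv.conf.minikube` is hypothetical, and the exact copy step inside the VM may differ from what #4052 actually does; this is a sketch under those assumptions, not the patch itself.

```shell
# Create a resolv.conf with a single nameserver entry (8.8.8.8).
# The path below is just an example staging location.
cat > /tmp/resolv.conf.minikube <<'EOF'
nameserver 8.8.8.8
EOF

# Inside the minikube VM, one would then replace /etc/resolv.conf
# with this file, e.g. (assumed invocation):
#   minikube ssh "echo 'nameserver 8.8.8.8' | sudo tee /etc/resolv.conf"
```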

@tstromberg tstromberg changed the title minikube 1.0.0 can not boot success on windows10 1.0.0 hyperv: waiting for k8s-app=kube-proxy: timed out waiting for the condition Apr 5, 2019
@tstromberg
Contributor

tstromberg commented Apr 5, 2019

It seems like the proxy isn't being scheduled here.

Do you mind adding the output of minikube logs? Thanks!

@tstromberg tstromberg added co/hyperv HyperV related issues co/kube-proxy issues relating to kube-proxy in some way labels Apr 5, 2019
@tstromberg tstromberg changed the title 1.0.0 hyperv: waiting for k8s-app=kube-proxy: timed out waiting for the condition 1.0.0 hyperv: kube-proxy: timed out waiting for the condition Apr 5, 2019
@tstromberg tstromberg added the priority/awaiting-more-evidence Lowest priority. Possibly useful, but not yet enough support to actually get it done. label Apr 5, 2019
@tstromberg
Contributor

I believe this issue was resolved in the v1.1.0 release. Please try upgrading to the latest release of minikube and run minikube delete to remove the previous cluster state.

If the same issue occurs, please re-open this bug. Thank you for opening this bug report, and for your patience!
