
waiting for k8s-app=kube-proxy: timed out waiting for the condition #3936

Closed
Fionajeremychik opened this issue Mar 22, 2019 · 5 comments
Labels: co/kube-proxy (issues relating to kube-proxy in some way), triage/duplicate (indicates an issue is a duplicate of another open issue)


@Fionajeremychik

minikube start
πŸ˜„ minikube v0.35.0 on darwin (amd64)
πŸ’‘ Tip: Use 'minikube start -p ' to create a new cluster, or 'minikube delete' to delete this one.
πŸ”„ Restarting existing virtualbox VM for "minikube" ...
βŒ› Waiting for SSH access ...
πŸ“Ά "minikube" IP address is 192.168.99.100
🐳 Configuring Docker as the container runtime ...
✨ Preparing Kubernetes environment ...
🚜 Pulling images required by Kubernetes v1.13.4 ...
πŸ”„ Relaunching Kubernetes v1.13.4 using kubeadm ...
βŒ› Waiting for pods: apiserver proxy
πŸ’£ Error restarting cluster: wait: waiting for k8s-app=kube-proxy: timed out waiting for the condition

😿 Sorry that minikube crashed. If this was unexpected, we would love to hear from you:
πŸ‘‰ https://github.com/kubernetes/minikube/issues/new

@tstromberg tstromberg changed the title minikube crashed waiting for k8s-app=kube-proxy: timed out waiting for the condition Mar 23, 2019
@efengx

efengx commented Mar 24, 2019

Same error here:
macos: 10.14.3
kubernetes: 1.13.4
minikube: 0.35.0

I0324 16:41:08.615930   12409 utils.go:224] > Mar 24 08:40:59 minikube kubelet[2929]: E0324 08:40:59.196633    2929 pod_workers.go:190] Error syncing pod 72e5d3bd-4e08-11e9-84a6-0800272e1a7b ("storage-provisioner_kube-system(72e5d3bd-4e08-11e9-84a6-0800272e1a7b)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(72e5d3bd-4e08-11e9-84a6-0800272e1a7b)"
I0324 16:41:08.615976   12409 utils.go:224] > Mar 24 08:41:07 minikube kubelet[2929]: E0324 08:41:07.197921    2929 pod_workers.go:190] Error syncing pod 72c5fec4-4e08-11e9-84a6-0800272e1a7b ("kubernetes-dashboard-ccc79bfc9-mj6z2_kube-system(72c5fec4-4e08-11e9-84a6-0800272e1a7b)"), skipping: failed to "StartContainer" for "kubernetes-dashboard" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=kubernetes-dashboard pod=kubernetes-dashboard-ccc79bfc9-mj6z2_kube-system(72c5fec4-4e08-11e9-84a6-0800272e1a7b)"
W0324 16:41:08.620405   12409 exit.go:87] Error restarting cluster: wait: waiting for k8s-app=kube-proxy: timed out waiting for the condition
πŸ’£  Error restarting cluster: wait: waiting for k8s-app=kube-proxy: timed out waiting for the condition

😿  Sorry that minikube crashed. If this was unexpected, we would love to hear from you:
πŸ‘‰  https://github.com/kubernetes/minikube/issues/new

pod info:

fengxiangdeMacBook-Pro% kubectl get po --all-namespaces                                         
NAMESPACE     NAME                                   READY   STATUS             RESTARTS   AGE
kube-system   etcd-minikube                          1/1     Running            3          86m
kube-system   kube-addon-manager-minikube            1/1     Running            3          82m
kube-system   kube-apiserver-minikube                1/1     Running            3          86m
kube-system   kube-controller-manager-minikube       1/1     Running            1          56m
kube-system   kube-scheduler-minikube                1/1     Running            3          86m
kube-system   kubernetes-dashboard-ccc79bfc9-mj6z2   0/1     CrashLoopBackOff   14         78m
kube-system   storage-provisioner                    0/1     CrashLoopBackOff   26         78m
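
The reason these two pods keep crashing should show up in their previous container logs; a sketch of the commands (pod names as listed above) is:

# --previous prints logs from the last terminated instance of the container
kubectl -n kube-system logs kubernetes-dashboard-ccc79bfc9-mj6z2 --previous
kubectl -n kube-system logs storage-provisioner --previous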

kubernetes-dashboard info:

fengxiangdeMacBook-Pro% kubectl describe -n kube-system pod kubernetes-dashboard-ccc79bfc9-mj6z2
Name:               kubernetes-dashboard-ccc79bfc9-mj6z2
Namespace:          kube-system
Priority:           0
PriorityClassName:  <none>
Node:               minikube/10.0.2.15
Start Time:         Sun, 24 Mar 2019 15:43:00 +0800
Labels:             addonmanager.kubernetes.io/mode=Reconcile
                    app=kubernetes-dashboard
                    pod-template-hash=ccc79bfc9
                    version=v1.10.1
Annotations:        <none>
Status:             Running
IP:                 172.17.0.2
Controlled By:      ReplicaSet/kubernetes-dashboard-ccc79bfc9
Containers:
  kubernetes-dashboard:
    Container ID:   docker://d11f77070e0a4133d7b7001f7ede1379f643f5d45e14a35d042cc7e8383008b2
    Image:          k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1
    Image ID:       docker-pullable://k8s.gcr.io/kubernetes-dashboard-amd64@sha256:0ae6b69432e78069c5ce2bcde0fe409c5c4d6f0f4d9cd50a17974fea38898747
    Port:           9090/TCP
    Host Port:      0/TCP
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Sun, 24 Mar 2019 16:52:01 +0800
      Finished:     Sun, 24 Mar 2019 16:52:01 +0800
    Ready:          False
    Restart Count:  13
    Liveness:       http-get http://:9090/ delay=30s timeout=30s period=10s #success=1 #failure=3
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-8jvnj (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  default-token-8jvnj:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-8jvnj
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason          Age                    From               Message
  ----     ------          ----                   ----               -------
  Normal   Scheduled       72m                    default-scheduler  Successfully assigned kube-system/kubernetes-dashboard-ccc79bfc9-mj6z2 to minikube
  Warning  Failed          68m (x2 over 70m)      kubelet, minikube  Failed to pull image "k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1": rpc error: code = Unknown desc = context canceled
  Warning  Failed          68m (x2 over 70m)      kubelet, minikube  Error: ErrImagePull
  Normal   BackOff         68m (x2 over 70m)      kubelet, minikube  Back-off pulling image "k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1"
  Warning  Failed          68m (x2 over 70m)      kubelet, minikube  Error: ImagePullBackOff
  Normal   Pulling         67m (x3 over 72m)      kubelet, minikube  pulling image "k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1"
  Normal   Pulling         58m                    kubelet, minikube  pulling image "k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1"
  Normal   Pulling         41m (x4 over 51m)      kubelet, minikube  pulling image "k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1"
  Warning  Failed          40m (x4 over 45m)      kubelet, minikube  Failed to pull image "k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1": rpc error: code = Unknown desc = context canceled
  Warning  Failed          40m (x4 over 45m)      kubelet, minikube  Error: ErrImagePull
  Warning  Failed          39m (x7 over 45m)      kubelet, minikube  Error: ImagePullBackOff
  Normal   BackOff         34m (x23 over 45m)     kubelet, minikube  Back-off pulling image "k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1"
  Normal   Pulled          29m (x3 over 30m)      kubelet, minikube  Container image "k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1" already present on machine
  Normal   SandboxChanged  24m                    kubelet, minikube  Pod sandbox changed, it will be killed and re-created.
  Normal   Pulled          23m (x4 over 24m)      kubelet, minikube  Container image "k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1" already present on machine
  Normal   Created         23m (x4 over 24m)      kubelet, minikube  Created container
  Normal   Started         23m (x4 over 24m)      kubelet, minikube  Started container
  Warning  BackOff         4m22s (x103 over 24m)  kubelet, minikube  Back-off restarting failed container

storage-provisioner info:

fengxiangdeMacBook-Pro% kubectl describe -n kube-system pod storage-provisioner
Name:               storage-provisioner
Namespace:          kube-system
Priority:           0
PriorityClassName:  <none>
Node:               minikube/10.0.2.15
Start Time:         Sun, 24 Mar 2019 15:43:00 +0800
Labels:             addonmanager.kubernetes.io/mode=Reconcile
                    integration-test=storage-provisioner
Annotations:        kubectl.kubernetes.io/last-applied-configuration:
                      {"apiVersion":"v1","kind":"Pod","metadata":{"annotations":{},"labels":{"addonmanager.kubernetes.io/mode":"Reconcile","integration-test":"s...
Status:             Running
IP:                 10.0.2.15
Containers:
  storage-provisioner:
    Container ID:  docker://8b1e8eb80421fc2b39f4d4159efb7692a7a0bc26180101de537e976795b911bf
    Image:         gcr.io/k8s-minikube/storage-provisioner:v1.8.1
    Image ID:      docker-pullable://gcr.io/k8s-minikube/storage-provisioner@sha256:088daa9fcbccf04c3f415d77d5a6360d2803922190b675cb7fc88a9d2d91985a
    Port:          <none>
    Host Port:     <none>
    Command:
      /storage-provisioner
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Sun, 24 Mar 2019 16:52:18 +0800
      Finished:     Sun, 24 Mar 2019 16:52:18 +0800
    Ready:          False
    Restart Count:  25
    Environment:    <none>
    Mounts:
      /tmp from tmp (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from storage-provisioner-token-zcv6s (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  tmp:
    Type:          HostPath (bare host directory volume)
    Path:          /tmp
    HostPathType:  Directory
  storage-provisioner-token-zcv6s:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  storage-provisioner-token-zcv6s
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason          Age                  From               Message
  ----     ------          ----                 ----               -------
  Normal   Scheduled       73m                  default-scheduler  Successfully assigned kube-system/storage-provisioner to minikube
  Warning  Failed          70m (x2 over 72m)    kubelet, minikube  Failed to pull image "gcr.io/k8s-minikube/storage-provisioner:v1.8.1": rpc error: code = Unknown desc = context canceled
  Warning  Failed          70m (x2 over 72m)    kubelet, minikube  Error: ErrImagePull
  Normal   BackOff         70m (x2 over 72m)    kubelet, minikube  Back-off pulling image "gcr.io/k8s-minikube/storage-provisioner:v1.8.1"
  Warning  Failed          70m (x2 over 72m)    kubelet, minikube  Error: ImagePullBackOff
  Normal   Pulling         70m (x3 over 73m)    kubelet, minikube  pulling image "gcr.io/k8s-minikube/storage-provisioner:v1.8.1"
  Normal   Pulled          63m                  kubelet, minikube  Successfully pulled image "gcr.io/k8s-minikube/storage-provisioner:v1.8.1"
  Normal   Pulled          63m (x2 over 63m)    kubelet, minikube  Container image "gcr.io/k8s-minikube/storage-provisioner:v1.8.1" already present on machine
  Normal   Created         63m (x3 over 63m)    kubelet, minikube  Created container
  Normal   Started         63m (x3 over 63m)    kubelet, minikube  Started container
  Warning  BackOff         63m (x2 over 63m)    kubelet, minikube  Back-off restarting failed container
  Normal   SandboxChanged  59m                  kubelet, minikube  Pod sandbox changed, it will be killed and re-created.
  Normal   Pulled          58m (x4 over 59m)    kubelet, minikube  Container image "gcr.io/k8s-minikube/storage-provisioner:v1.8.1" already present on machine
  Normal   Created         58m (x4 over 59m)    kubelet, minikube  Created container
  Normal   Started         58m (x4 over 59m)    kubelet, minikube  Started container
  Warning  BackOff         57m (x12 over 59m)   kubelet, minikube  Back-off restarting failed container
  Normal   SandboxChanged  52m                  kubelet, minikube  Pod sandbox changed, it will be killed and re-created.
  Normal   Pulled          51m (x4 over 52m)    kubelet, minikube  Container image "gcr.io/k8s-minikube/storage-provisioner:v1.8.1" already present on machine
  Normal   Created         51m (x4 over 52m)    kubelet, minikube  Created container
  Normal   Started         51m (x4 over 52m)    kubelet, minikube  Started container
  Warning  BackOff         32m (x97 over 52m)   kubelet, minikube  Back-off restarting failed container
  Normal   SandboxChanged  25m                  kubelet, minikube  Pod sandbox changed, it will be killed and re-created.
  Normal   Pulled          24m (x4 over 25m)    kubelet, minikube  Container image "gcr.io/k8s-minikube/storage-provisioner:v1.8.1" already present on machine
  Normal   Created         24m (x4 over 25m)    kubelet, minikube  Created container
  Normal   Started         24m (x4 over 25m)    kubelet, minikube  Started container
  Warning  BackOff         17s (x119 over 25m)  kubelet, minikube  Back-off restarting failed container

docker images info:

fengxiangdeMacBook-Pro% minikube ssh 'docker images'
REPOSITORY                                TAG                 IMAGE ID            CREATED             SIZE
k8s.gcr.io/kube-proxy                     v1.13.4             fadcc5d2b066        3 weeks ago         80.3MB
k8s.gcr.io/kube-apiserver                 v1.13.4             fc3801f0fc54        3 weeks ago         181MB
k8s.gcr.io/kube-controller-manager        v1.13.4             40a817357014        3 weeks ago         146MB
k8s.gcr.io/kube-scheduler                 v1.13.4             dd862b749309        3 weeks ago         79.6MB
k8s.gcr.io/kubernetes-dashboard-amd64     v1.10.1             f9aed6605b81        3 months ago        122MB
k8s.gcr.io/coredns                        1.2.6               f59dcacceff4        4 months ago        40MB
k8s.gcr.io/etcd                           3.2.24              3cab8e1b9802        6 months ago        220MB
k8s.gcr.io/kube-addon-manager             v8.6                9c16409588eb        13 months ago       78.4MB
k8s.gcr.io/pause                          3.1                 da86e6ba6ca1        15 months ago       742kB
gcr.io/k8s-minikube/storage-provisioner   v1.8.1              4689081edb10        16 months ago       80.8MB

@efengx

efengx commented Mar 24, 2019

On the first start, the images really had not been pulled.
After running minikube stop and then minikube start again, there was no restart failure and it started to pull the images.

@efengx

efengx commented Mar 24, 2019

If I need to manually refresh the images, please tell me the specific command.
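
For reference, one way to re-pull an image inside the minikube VM, assuming Docker is the container runtime as shown above, would be something along these lines (the image tags are the ones from the docker images listing earlier in this thread):

# re-pull the images inside the VM's Docker daemon
minikube ssh 'docker pull k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1'
minikube ssh 'docker pull gcr.io/k8s-minikube/storage-provisioner:v1.8.1'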

@marcosdiez
Contributor

Hi! I had a similar problem and my solution is here: #4052
Could you please try it yourself and comment whether the solution works for you as well? Thank you!

@balopat balopat added the co/kube-proxy issues relating to kube-proxy in some way label Apr 4, 2019
@balopat
Contributor

balopat commented Apr 4, 2019

Closing as a duplicate of #3850.
Can you please upgrade to 1.0.0 and attach the logs to #3850? Thanks.

@balopat balopat closed this as completed Apr 4, 2019
@balopat balopat added the triage/duplicate Indicates an issue is a duplicate of other open issue. label Apr 4, 2019