
Race condition between CNI plugin install and aws-k8s-agent startup #282

Closed
mx-shift opened this issue Jan 8, 2019 · 26 comments
Labels
bug priority/P0 Highest priority. Someone needs to actively work on this.
Comments

@mx-shift commented Jan 8, 2019

My employer runs GitLab CI using the Kubernetes executor, which runs each CI job as a separate pod. Since the workload is bursty, cluster-autoscaler is also enabled. When certain CI stages begin, 100+ pods are created rapidly, and a large number of them are unschedulable until the autoscaler has spun up enough new nodes. GitLab CI will wait up to 10m for a pod to start before assuming the job failed.

Occasionally, one of these pods will hit the 10m timeout with the following events according to kubectl describe po:

Events:
  Type     Reason                  Age                From                                                 Message
  ----     ------                  ----               ----                                                 -------
  Normal   Scheduled               14m                default-scheduler                                    Successfully assigned gitlab-ci-runner/runner-cad974a7-project-1-concurrent-0cvwc7 to ip-172-30-13-60.us-west-2.compute.internal

  Warning  FailedCreatePodSandBox  14m                kubelet, ip-172-30-13-60.us-west-2.compute.internal  Failed create pod sandbox: rpc error: code = Unknown desc = [failed to set up sandbox container "c96fe97cd035ce410f537254a20102485c83eb275f3fc317f0e2b5ea5c4b7720" network for pod "runner-cad974a7-project-1-concurrent-0cvwc7": NetworkPlugin cni failed to set up pod "runner-cad974a7-project-1-concurrent-0cvwc7_gitlab-ci-runner" network: rpc error: code = Unavailable desc = all SubConns are in TransientFailure, latest connection error: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:50051: connect: connection refused", failed to clean up sandbox container "c96fe97cd035ce410f537254a20102485c83eb275f3fc317f0e2b5ea5c4b7720" network for pod "runner-cad974a7-project-1-concurrent-0cvwc7": NetworkPlugin cni failed to teardown pod "runner-cad974a7-project-1-concurrent-0cvwc7_gitlab-ci-runner" network: rpc error: code = Unavailable desc = all SubConns are in TransientFailure, latest connection error: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:50051: connect: connection refused"]
  Normal   SandboxChanged          4m (x46 over 14m)  kubelet, ip-172-30-13-60.us-west-2.compute.internal  Pod sandbox changed, it will be killed and re-created.

So, the pod failed during setup because the CNI plugin couldn't connect to aws-k8s-agent. Looking at install-aws.sh, the CNI plugin is installed before aws-k8s-agent starts, and kubelet assumes CNI is ready as soon as the plugin is installed. If there are pods waiting to be scheduled, there is a narrow window in which a pod can be scheduled and CNI setup will fail because aws-k8s-agent hasn't started responding yet.

Unfortunately, the CNI plugin also reports a failure during sandbox cleanup (again due to connection refused when attempting CNI teardown), which prevents the pod from being rescheduled or retried until the container is manually removed.

@mx-shift (Author)

I started looking into the teardown failures, since fixing those would likely keep pods from getting stuck. In plugins/routed-eni/cni.go, del() always attempts to talk to ipamd and fails if ipamd isn't responding. Since setting up the host veth is one of the last steps done by add(), del() can use the existence of the host veth as a quick check before attempting to talk to ipamd (sketched below). That way, in cases where setup failed, teardown can return success quickly regardless of whether ipamd is listening.
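
A minimal sketch of that short-circuit, assuming the github.com/vishvananda/netlink library the plugin already uses; `hostVethName` and `teardownWithIpamd` are hypothetical stand-ins for however the plugin derives the host-side veth name and performs the existing teardown, not the plugin's actual signatures:

```go
package plugin

import (
	"github.com/vishvananda/netlink"
)

// del sketches the proposed teardown short-circuit: if the host-side veth was
// never created, add() did not complete, so there is nothing to tear down and
// we can return success without requiring ipamd to be reachable.
func del(hostVethName string) error {
	if _, err := netlink.LinkByName(hostVethName); err != nil {
		if _, notFound := err.(netlink.LinkNotFoundError); notFound {
			// Setup never finished for this pod; report success so kubelet
			// can clean up the sandbox even while ipamd is down.
			return nil
		}
		return err
	}

	// Host veth exists, so add() completed: fall through to the existing
	// teardown path that asks ipamd to release the pod's IP and removes
	// the routes and rules.
	return teardownWithIpamd(hostVethName)
}

// teardownWithIpamd stands in for the existing del() logic that talks to
// ipamd over gRPC on 127.0.0.1:50051.
func teardownWithIpamd(hostVethName string) error {
	// ... existing implementation ...
	return nil
}
```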

@mogren (Contributor) commented Jan 10, 2019

Oh, interesting. Thanks a lot for looking into this issue and letting us know what you've found so far.

I have a feeling this might be related to some other issues we've seen like #272, #204 or #180, but I don't know enough yet.

@mogren (Contributor) commented Mar 29, 2019

I was unable to try the quick veth-check fix; punting this to v1.5.

@mogren mogren modified the milestones: v1.4, v1.5 Mar 29, 2019
@mogren mogren modified the milestones: v1.5, v1.6 Jun 5, 2019
@jtszalay commented Jun 27, 2019

I'm at the same employer that @kc8apf was at and am handling this work now. When I run kubectl describe po <name>, I get the following events:

Events:
  Type     Reason                  Age               From                                                  Message
  ----     ------                  ----              ----                                                  -------
  Normal   Scheduled               4m                default-scheduler                                     Successfully assigned gitlab-ci-runner/runner-1bcd72b4-project-1-concurrent-1012nf6l to ip-172-30-25-134.us-west-2.compute.internal
  Warning  FailedCreatePodSandBox  4m                kubelet, ip-172-30-25-134.us-west-2.compute.internal  Failed create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "491f4efa828fc1268151d56b5039796eef079bd43cdf8c77d6fc19dc2f8a642f" network for pod "runner-1bcd72b4-project-1-concurrent-1012nf6l": NetworkPlugin cni failed to set up pod "runner-1bcd72b4-project-1-concurrent-1012nf6l_gitlab-ci-runner" network: add cmd: failed to assign an IP address to container
  Warning  FailedCreatePodSandBox  4m                kubelet, ip-172-30-25-134.us-west-2.compute.internal  Failed create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "cf73b20a6e755269ff781e4022d7624cc396062eaab83031cd5f290ab24db311" network for pod "runner-1bcd72b4-project-1-concurrent-1012nf6l": NetworkPlugin cni failed to set up pod "runner-1bcd72b4-project-1-concurrent-1012nf6l_gitlab-ci-runner" network: add cmd: failed to assign an IP address to container
  Warning  FailedCreatePodSandBox  4m                kubelet, ip-172-30-25-134.us-west-2.compute.internal  Failed create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "646c27118964586386882d5b31a4c9079c822dde99b4884694bd8dafcf38592c" network for pod "runner-1bcd72b4-project-1-concurrent-1012nf6l": NetworkPlugin cni failed to set up pod "runner-1bcd72b4-project-1-concurrent-1012nf6l_gitlab-ci-runner" network: add cmd: failed to assign an IP address to container
  Warning  FailedCreatePodSandBox  4m                kubelet, ip-172-30-25-134.us-west-2.compute.internal  Failed create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "78761b304bcfc8530ac3c64fd1389e33daf113a45200280fa81bf85cdabc9928" network for pod "runner-1bcd72b4-project-1-concurrent-1012nf6l": NetworkPlugin cni failed to set up pod "runner-1bcd72b4-project-1-concurrent-1012nf6l_gitlab-ci-runner" network: add cmd: failed to assign an IP address to container
  Warning  FailedCreatePodSandBox  4m                kubelet, ip-172-30-25-134.us-west-2.compute.internal  Failed create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "b2741e895e6da9745f8a6ae2d2447f74fc917f6240191b50f3421adefb621566" network for pod "runner-1bcd72b4-project-1-concurrent-1012nf6l": NetworkPlugin cni failed to set up pod "runner-1bcd72b4-project-1-concurrent-1012nf6l_gitlab-ci-runner" network: add cmd: failed to assign an IP address to container
  Warning  FailedCreatePodSandBox  4m                kubelet, ip-172-30-25-134.us-west-2.compute.internal  Failed create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "1139869224ee604030287cb96c43bc7adecc33a28a2c11155b638bb7ff002f73" network for pod "runner-1bcd72b4-project-1-concurrent-1012nf6l": NetworkPlugin cni failed to set up pod "runner-1bcd72b4-project-1-concurrent-1012nf6l_gitlab-ci-runner" network: add cmd: failed to assign an IP address to container
  Warning  FailedCreatePodSandBox  4m                kubelet, ip-172-30-25-134.us-west-2.compute.internal  Failed create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "316b92154ad06c474ec66170b1c3e1708cc98ee42e74736f1d57498924454e68" network for pod "runner-1bcd72b4-project-1-concurrent-1012nf6l": NetworkPlugin cni failed to set up pod "runner-1bcd72b4-project-1-concurrent-1012nf6l_gitlab-ci-runner" network: add cmd: failed to assign an IP address to container
  Warning  FailedCreatePodSandBox  4m                kubelet, ip-172-30-25-134.us-west-2.compute.internal  Failed create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "e869885bb98570241947d791e34765205ab7d2b7e1d96e37a254faf56f20e20d" network for pod "runner-1bcd72b4-project-1-concurrent-1012nf6l": NetworkPlugin cni failed to set up pod "runner-1bcd72b4-project-1-concurrent-1012nf6l_gitlab-ci-runner" network: add cmd: failed to assign an IP address to container
  Warning  FailedCreatePodSandBox  4m                kubelet, ip-172-30-25-134.us-west-2.compute.internal  Failed create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "d19b1b6effbf693b506123a0f4b8d87546ba3f2c3c16072f46f95313857e0d6d" network for pod "runner-1bcd72b4-project-1-concurrent-1012nf6l": NetworkPlugin cni failed to set up pod "runner-1bcd72b4-project-1-concurrent-1012nf6l_gitlab-ci-runner" network: add cmd: failed to assign an IP address to container
  Warning  FailedSync              4m                kubelet, ip-172-30-25-134.us-west-2.compute.internal  error determining status: rpc error: code = Unknown desc = Error: No such container: 646c27118964586386882d5b31a4c9079c822dde99b4884694bd8dafcf38592c
  Warning  FailedCreatePodSandBox  4m (x3 over 4m)   kubelet, ip-172-30-25-134.us-west-2.compute.internal  (combined from similar events): Failed create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "8fe86f9a9db6731d81ed2ff7688e6f42e9425b0158551b776c6907dcf14dae0a" network for pod "runner-1bcd72b4-project-1-concurrent-1012nf6l": NetworkPlugin cni failed to set up pod "runner-1bcd72b4-project-1-concurrent-1012nf6l_gitlab-ci-runner" network: add cmd: failed to assign an IP address to container
  Normal   SandboxChanged          4m (x12 over 4m)  kubelet, ip-172-30-25-134.us-west-2.compute.internal  Pod sandbox changed, it will be killed and re-created.

Running the sudo bash /opt/cni/bin/aws-cni-support.sh command on that node gets me the following:

[ec2-user@ip-172-30-25-134 ~]$ sudo bash /opt/cni/bin/aws-cni-support.sh
[curl progress output trimmed]
curl: (7) Failed to connect to localhost port 10255: Connection refused

@mikerlt commented Jul 1, 2019

I am running into the exact same problem when using cluster autoscaling.

@ianCambrio

I'm administering a cluster through Argo, but I'm seeing the same error on my pods while trying to autoscale r5.large spot instances.

@t0ny-peng commented Jul 2, 2019

I'm in the same situation as @kc8apf. We are using EKS + cluster-autoscaler to run our GitLab CI runners. The cluster version is 1.13 eks.2 and the CNI version is 1.5.0.

Sometimes a pod gets stuck in the ContainerCreating stage with a description like:

Events:
  Type     Reason                  Age              From                                                 Message
  ----     ------                  ----             ----                                                 -------
  Warning  FailedScheduling        4m (x2 over 4m)  default-scheduler                                    0/3 nodes are available: 1 node(s) didn't match node selector, 3 Insufficient cpu.
  Normal   TriggeredScaleUp        4m               cluster-autoscaler                                   pod triggered scale-up: [{gitlab-x86-64-asg 3->4 (max: 16)}]
  Warning  FailedScheduling        4m (x2 over 4m)  default-scheduler                                    0/4 nodes are available: 1 node(s) didn't match node selector, 1 node(s) had taints that the pod didn't tolerate, 3 Insufficient cpu.
  Normal   Scheduled               3m               default-scheduler                                    Successfully assigned default/runner-ex6ucuta-project-136-concurrent-48nnx7 to ip-172-31-2-101.us-west-2.compute.internal
  Warning  FailedCreatePodSandBox  3m               kubelet, ip-172-31-2-101.us-west-2.compute.internal  Failed create pod sandbox: rpc error: code = Unknown desc = [failed to set up sandbox container "d6f10d64bd0133c14aa4e95cf50565c459198f652656b254dd09745c713e24e5" network for pod "runner-ex6ucuta-project-136-concurrent-48nnx7": NetworkPlugin cni failed to set up pod "runner-ex6ucuta-project-136-concurrent-48nnx7_default" network: rpc error: code = Unavailable desc = all SubConns are in TransientFailure, latest connection error: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:50051: connect: connection refused", failed to clean up sandbox container "d6f10d64bd0133c14aa4e95cf50565c459198f652656b254dd09745c713e24e5" network for pod "runner-ex6ucuta-project-136-concurrent-48nnx7": NetworkPlugin cni failed to teardown pod "runner-ex6ucuta-project-136-concurrent-48nnx7_default" network: rpc error: code = Unavailable desc = all SubConns are in TransientFailure, latest connection error: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:50051: connect: connection refused"]

This only happens when new jobs trigger a node scale-up. After that, the new node isn't able to take any pods because of this CNI failure.

I compared the ipamd.log between a successful node and a failed node:

successful_ipamd.log.2019-07-02-18.log
failed_ipamd.log.2019-07-02-18.log

An interesting finding: on the successful node, pod IP assignment happened only after all 30 IPs had been added. Note that the log line Add/Update for Pod runner-xxx first shows up at line 156, after all of the Added ENI(... entries.

On the failed node, however, the pod tried to get an IP (description may not be accurate) before the ENI was even added! Starting from line 58, it tried 12 times to get an IP and failed, apparently because the CNI was not ready yet. After 12 attempts it gave up, and only then did the CNI get the chance to add the 30 IPs to the IP pool.

Any thoughts on how to coordinate the startup sequence? Thanks!

@stijndehaes

A solution could be to run two containers in the daemonset: one to install the CNI, and one that runs /app/aws-k8s-agent. The container that installs the CNI could do a gRPC health check, or call an endpoint, to verify that /app/aws-k8s-agent is fully up and running before installing the CNI (see the sketch below). Calico, for example, runs two containers in its daemonset: https://docs.projectcalico.org/v3.1/getting-started/kubernetes/installation/hosted/calico.yaml

However, they do not have any coordination between them in the install-CNI script: https://github.com/projectcalico/cni-plugin/blob/master/k8s-install/scripts/install-cni.sh
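
For illustration, here is a minimal sketch (not the project's actual code) of what such a wait could look like in Go, using the standard gRPC health-checking API. The 127.0.0.1:50051 address comes from the errors in this thread; the empty service name (overall server health) is an assumption about how aws-k8s-agent registers its health server:

```go
package main

import (
	"context"
	"log"
	"time"

	"google.golang.org/grpc"
	healthpb "google.golang.org/grpc/health/grpc_health_v1"
)

// waitForIpamd polls the agent's gRPC health service until it reports SERVING.
func waitForIpamd(addr string) error {
	conn, err := grpc.Dial(addr, grpc.WithInsecure())
	if err != nil {
		return err
	}
	defer conn.Close()

	client := healthpb.NewHealthClient(conn)
	for {
		ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
		resp, err := client.Check(ctx, &healthpb.HealthCheckRequest{Service: ""})
		cancel()
		if err == nil && resp.GetStatus() == healthpb.HealthCheckResponse_SERVING {
			return nil
		}
		log.Printf("ipamd not ready yet (err=%v), retrying...", err)
		time.Sleep(time.Second)
	}
}

func main() {
	if err := waitForIpamd("127.0.0.1:50051"); err != nil {
		log.Fatalf("failed waiting for ipamd: %v", err)
	}
	// Only now copy the CNI plugin binary and config (the work install-aws.sh
	// does today), so kubelet never sees a CNI config before ipamd can answer.
	log.Println("ipamd is SERVING; safe to install the CNI binary and config")
}
```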

@stijndehaes

I made a naive implementation of the wait mechanism (meaning I added a sleep 10s) here:
https://github.com/stijndehaes/amazon-vpc-cni-k8s/blob/feature/test-naive-dependency/config/v1.4/aws-k8s-cni.yaml

According to my tests this solves the problem. My test consists of launching 100 pods at once via Airflow, which makes the cluster scale from 3 to 15 machines, and seeing whether they all succeed. Without the sleep this fails consistently; with the sleep added it succeeds (at least in the 2 runs I tested).
Does making this POC better sound like a good idea? Or should we move the copy logic into the Go code? In the Go code we could check whether the ENI has been attached and whether new IP addresses have been received (see the sketch below).
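
A rough sketch of that Go-side check: poll ipamd until it reports at least one attached ENI before writing the CNI config. The introspection URL below (http://localhost:61679/v1/enis) and the "non-empty body means ready" shortcut are assumptions made for illustration; a real implementation would confirm the endpoint against the running version and decode the response to count attached ENIs and available IPs:

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

// ipamdHasENIs reports whether the (assumed) local introspection endpoint
// answers with a non-trivial payload, used here as a crude readiness signal.
func ipamdHasENIs(url string) bool {
	resp, err := http.Get(url)
	if err != nil {
		return false
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	// Treat any non-trivial 200 response as "ipamd is up and knows about ENIs";
	// a real implementation would decode the JSON and count ENIs/warm IPs.
	return err == nil && resp.StatusCode == http.StatusOK && len(body) > 2
}

func main() {
	const url = "http://localhost:61679/v1/enis" // assumed introspection path
	for !ipamdHasENIs(url) {
		fmt.Println("waiting for ipamd to report attached ENIs...")
		time.Sleep(time.Second)
	}
	fmt.Println("ipamd reports ENIs; safe to install the CNI config")
}
```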

@jtszalay

With the above changes applied, I now see the following message:

  Warning  FailedCreatePodSandBox  2m (x4 over 2m)   kubelet, ip.us-west-2.compute.internal  (combined from similar events): Failed create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "48da484a03e11f033f32c6b69449001b58db1aec21c2194cd5b057f33e28640a" network for pod "runner-94da68a0-project-1-concurrent-372d5rx": NetworkPlugin cni failed to set up pod "runner-94da68a0-project-1-concurrent-372d5rx_gitlab-ci-runner" network: conf.VethPrefix must be less than 4 characters long

@sc250024

This is happening to us as well.

CNI version: 602401143452.dkr.ecr.eu-west-1.amazonaws.com/amazon-k8s-cni:v1.5.0
EKS version: v1.13.7-eks-c57ff8

Is there a fix coming down the pipeline for this? It's legitimately affecting production workloads right now.

@clementlecorre

I have the same problem!

CNI version: 602401143452.dkr.ecr.eu-west-1.amazonaws.com/amazon-k8s-cni:v1.5.0
EKS version: v1.13.7-eks-c57ff8

mogren pushed commits to mogren/amazon-vpc-cni-k8s that referenced this issue (Jul 31 to Aug 1, 2019):
Wait for the ipamd health check to be SERVING before copying in the CNI binary and config file. aws#282

mogren pushed a commit that referenced this issue Aug 1, 2019:
Wait for the ipamd health check to be SERVING before copying in the CNI binary and config file. #282
@Mewzyk commented Aug 2, 2019

I am having the same problem right now.
CNI version: 602401143452.dkr.ecr.eu-west-1.amazonaws.com/amazon-k8s-cni:v1.5.0
EKS version: v1.13.7-eks-c57ff8

I just migrated from kops, which did not have this problem. At this point my EKS cluster can't provide the same functionality.

That is: fresh cluster + cluster-autoscaler + run 10 jobs.

@sc250024 commented Aug 2, 2019

@Mewzyk This is an issue with the CNI provider, not so much EKS, although this CNI is the default on EKS.

A much better test would be to install this CNI on kops to see if the same error occurs.

@mogren (Contributor) commented Aug 2, 2019

@Mewzyk I just made a release candidate with a potential fix for this issue. It's not yet a final release and we are still testing, but if you would like to try the build, the instructions are here:

https://github.com/aws/amazon-vpc-cni-k8s/releases/tag/v1.5.1-rc1

@Mewzyk commented Aug 4, 2019

@mogren Working so far. Thanks for the patch.

@mogren (Contributor) commented Aug 4, 2019

Thanks for the update.

A note: the recent v1.5.1 release does not have the startup fix! I will make a v1.5.2 soon with the quick config fix plus the changes currently in v1.5.1-rc1.

@edmorley (Contributor) commented Aug 6, 2019

I believe one of the reasons more people are reporting this issue now than before is that:

  • Kubernetes 1.13+ has a regression where the kubelet won't retry PodSandbox creation for pods with a restartPolicy of Never (kubernetes/kubernetes#79398), meaning that pods get stuck in ContainerCreating after hitting the CNI plugin race condition and are not able to self-recover
  • The above regression has been fixed in Kubernetes 1.13.8; however, EKS is still running 1.13.7
  • More people are finally upgrading to EKS 1.13 now that (a) it's generally available on AWS, and (b) version 1.10 has been EOLed

@mogren Since the fix for the Kubernetes regression is in the kubelet, would it be possible to upgrade the version in the AMI to 1.13.8 as a stop-gap until the control plane can also be upgraded so that the two are in sync? (See aws/containers-roadmap/issues/411.)

@mogren (Contributor) commented Aug 7, 2019

@edmorley Hey, thanks a lot for the tip. We can definitely bump the kubelet version in the worker node AMI before the control plane gets updated. That said, we will try to update both versions, since some users have Prometheus configured to alert on mismatched versions.

@jorge-vargas-tfs

We created a custom AMI with kubelet version 1.13.8 and can confirm it fixes this issue. Also, the control plane is now on 1.13.8; is the new AMI coming anytime soon?

@mattmi88

I can confirm another configuration that works as of today (2019-08-13):

  • EKS K8S control plane is v1.13.8-eks-a977ba (from kubectl version)
  • EKS kubelets on the worker nodes are v1.13.7-eks-c57ff8 (from kubectl get node)
  • Manually updated the aws-cni daemonset to v1.5.3 (kubectl edit ds -n kube-system aws-cni)
  • Cluster autoscaler is v1.15.1

I posted a job with a specific affinity and toleration to target an ASG that had 0 nodes in it. Cluster autoscaler added a node to the ASG. When the node became ready, there was a little delay, but no FailedCreatePodSandBox errors in the pod events.

@sc250024

@mattmi88 Ditto!

@mogren (Contributor) commented Aug 22, 2019

Fixed for new nodes in v1.5.3, and merged back to master.

@mogren mogren closed this as completed Aug 22, 2019
@lgarrett-isp

I encountered this issue just this morning with a build off the tip of the amazon-vpc-cni-k8s master branch. Config:

  • EKS K8S control plane is v1.13.8-eks-a977ba
  • EKS kubelets on the worker nodes are v1.13.8-eks-cd3eb0
  • aws-cni v1.5.3 does not support our instance type (c5.metal) even though support was checked into master before the release, so I pulled master, built our own image, and updated the aws-node daemonset. Are there things in v1.5.3 that fix this that are NOT in master? Is there a regression in master that will reveal this bug again in the next release?

What I found is that while an aws-node Pod is (re)deploying to a given instance (so, during CNI upgrades and worker node scale-ups), two things can happen if Pods are deleted from or added to that instance while aws-node is spinning down and re-initializing:

  1. Any Pods removed from that specific Node during the aws-node deployment window end up in a race condition where kubelet reaches out to the CNI to release the IP, but the CNI seems to have no reference to the deleted Pod or IP and reports failure. This continues indefinitely (or at least was still going hours later). It does not appear to affect IP allocation or any other behavior that I can see at this point, other than kubelet requesting "Delete Network" once per minute for Pods that no longer exist, with the CNI and kubelet reporting the error.
  2. The original error reported in this issue: Pods that try to spin up on the instance while the aws-node Pod is re-deploying sometimes get stuck permanently in ContainerCreating with the following error (kubectl delete pod immediately remedies the issue):

failed create pod sandbox: rpc error: code = Unknown desc = [failed to set up sandbox container "7af9a6ac803344de0bb5fcd79c3852107cf473ce8f89a6c8b8cf08cd99dad226" network for pod "datadog-kube-state-metrics-cc4669b55-vtzz9": NetworkPlugin cni failed to set up pod "datadog-kube-state-metrics-cc4669b55-vtzz9_default" network: rpc error: code = Unavailable desc = all SubConns are in TransientFailure, latest connection error: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:50051: connect: connection refused", failed to clean up sandbox container "7af9a6ac803344de0bb5fcd79c3852107cf473ce8f89a6c8b8cf08cd99dad226" network for pod "datadog-kube-state-metrics-cc4669b55-vtzz9": NetworkPlugin cni failed to teardown pod "datadog-kube-state-metrics-cc4669b55-vtzz9_default" network: rpc error: code = Unavailable desc = all SubConns are in TransientFailure, latest connection error: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:50051: connect: connection refused"

@tibin-mfl commented Dec 18, 2020

@mogren I am facing a similar issue when a new pod gets scheduled to a node.

Failed to create pod sandbox: rpc error: code = Unknown desc = [failed to set up sandbox container "9f3dcceb6da3b53d10af353d83b374cf4359d38d85da5cc813925f58f02a232c" network for pod "nginx-deployment-56c774d974-kqgcm":
 networkPlugin cni failed to set up pod "nginx-deployment-56c774d974-kqgcm_nginx-example" network: add cmd: Error received from AddNetwork gRPC call: rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:50051: connect: connection refused", 
failed to clean up sandbox container "9f3dcceb6da3b53d10af353d83b374cf4359d38d85da5cc813925f58f02a232c" network for pod "nginx-deployment-56c774d974-kqgcm": networkPlugin cni failed to teardown pod "nginx-deployment-56c774d974-kqgcm_nginx-example"
 network: del cmd: error received from DelNetwork gRPC call: rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:50051: connect: connection refused"]

EKS: 1.17
AWS CNI: 1.7.5

@jayanthvn (Contributor)

> (quoted from @tibin-mfl's comment above)

@tibin-mfl This is a very old issue. Please file a new issue, and can you please share the logs?

bash /opt/cni/bin/aws-cni-support.sh

Thanks.
