We have a Terraform configuration that sets up a cluster on 1.12, but after moving to 1.13 we now get the following error:
Warning FailedCreatePodSandBox 8m3s kubelet, ip-10-241-235-194.us-west-2.compute.internal Failed create pod sandbox: rpc error: code = Unknown desc = [failed to set up sandbox container "c5a0458bfd2d15dac31047ba459db66b2679c1497d97784d4a2dda66e4bb5a30" network for pod "bonk-pipeline-1565047792578-driver": NetworkPlugin cni failed to set up pod "bonk-pipeline-1565047792578-driver_gregory" network: rpc error: code = Unavailable desc = all SubConns are in TransientFailure, latest connection error: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:50051: connect: connection refused", failed to clean up sandbox container "c5a0458bfd2d15dac31047ba459db66b2679c1497d97784d4a2dda66e4bb5a30" network for pod "bonk-pipeline-1565047792578-driver": NetworkPlugin cni failed to teardown pod "bonk-pipeline-1565047792578-driver_gregory" network: rpc error: code = Unavailable desc = all SubConns are in TransientFailure, latest connection error: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:50051: connect: connection refused"]
These instances are r3.2xlarge types that we trigger a scale-up on to bring up additional nodes. The only Pod we are purposely scheduling is a Spark worker via the Spark Launcher.
I've attached the logs (aws-cni-support.tar.gz), like others with similar FailedCreatePodSandBox issues. We are using the latest 1.13 worker AMI.
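For anyone else debugging this: the repeated "connection refused" on 127.0.0.1:50051 means the CNI plugin cannot reach the node-local IPAM daemon (ipamd) that the aws-node DaemonSet runs on each worker. A minimal set of checks, assuming the standard aws-node labels and log locations (substitute your own node and pod names; this is a sketch, not an official runbook):

```bash
# Is the aws-node pod scheduled and Running on the affected node?
kubectl -n kube-system get pods -l k8s-app=aws-node -o wide | grep ip-10-241-235-194

# Inspect its logs for startup or ENI-attachment errors
kubectl -n kube-system logs <aws-node-pod-name>

# On the node itself, ipamd writes its own log; the support script is what
# produces the aws-cni-support.tar.gz bundle attached above
sudo tail -n 100 /var/log/aws-routed-eni/ipamd.log*
sudo /opt/cni/bin/aws-cni-support.sh
```

If the aws-node pod is crash-looping or missing on the new node, the sandbox failures above are the expected downstream symptom.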
@TechnicalMercenary Hey, sorry about that. I have a release candidate out with a potential fix for this issue. I'm planning to release a final v1.5.2 within the next few days.
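For reference, once the new tag is published, bumping the image on the aws-node DaemonSet is one way to pick it up. The snippet below is a sketch with placeholder tags and the us-west-2 registry; check the amazon-vpc-cni-k8s release notes for the exact manifest and image for your region before using it:

```bash
# Option 1: apply the release manifest for the new version (replace <tag>)
kubectl apply -f https://raw.githubusercontent.com/aws/amazon-vpc-cni-k8s/<tag>/config/v1.5/aws-k8s-cni.yaml

# Option 2: bump only the image on the existing DaemonSet
# (us-west-2 registry shown; replace <tag> with the RC or final v1.5.2 tag)
kubectl -n kube-system set image daemonset/aws-node \
  aws-node=602401143452.dkr.ecr.us-west-2.amazonaws.com/amazon-k8s-cni:<tag>

# Watch the rollout across the nodes
kubectl -n kube-system rollout status daemonset/aws-node
```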