Pod restarting at node startup #1049
Comments
@srggavrilov Couldn't reproduce it. I went from CNI v1.5.7 to v1.6.3 and upgraded kube-proxy from 1.14.7 to 1.14.9, and I don't see any restarts. This is probably some race condition during startup on your end, and bumping up initialDelaySeconds might not help here, as the check below will time out in 36 seconds. We might have to tune that timeout. amazon-vpc-cni-k8s/scripts/entrypoint.sh, line 51 in f1f9068
Will try to see if we can reproduce it so that we can check what is contributing to the delay in this case. Also, it would be helpful if you could run 'aws-cni-support.sh' pre- and post-upgrade and share the output with us.
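For context, the check referenced above is a bounded retry loop in `entrypoint.sh`. A minimal sketch of that pattern, with a ~36-second budget, might look like the following (the function and helper names here are hypothetical illustrations, not the script's actual code):

```shell
#!/bin/sh
# Hypothetical sketch of a bounded readiness wait: retry a check until it
# succeeds or the attempt budget is exhausted (12 attempts x 3s = ~36s).
wait_for_ipam() {
  attempts=12   # illustrative: 12 attempts x 3s sleep = 36s total budget
  i=0
  while [ "$i" -lt "$attempts" ]; do
    if check_ready; then   # stand-in for the real ipamd health check
      return 0
    fi
    i=$((i + 1))
    sleep 3
  done
  echo "timed out waiting for ipamd" >&2
  return 1
}
```

If the node is slow to come up (e.g. kube-proxy has not programmed its rules yet), the budget is exhausted, the entrypoint exits nonzero, and kubelet restarts the Pod, which matches the "first run fails, subsequent runs succeed" symptom.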
@achevuru thanks for checking this. I haven't been able to reproduce it in a test environment either. Could that be related to …? I think having this timeout configurable is a good idea in any case. Unfortunately I'm not allowed to share the full 'aws-cni-support.sh' output from prod, because it contains sensitive data.
@srggavrilov Yeah, #874 and #1028 will provide the ability to configure the ipamd timeout. Will close the issue since you're no longer running into it.
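A sketch of what making that timeout configurable could look like, assuming a hypothetical `IPAMD_TIMEOUT_SECONDS` environment variable (not necessarily the name the changes in #874/#1028 actually use):

```shell
#!/bin/sh
# Hypothetical sketch: derive the retry budget from an environment variable
# instead of a hard-coded constant. IPAMD_TIMEOUT_SECONDS and the 3-second
# interval are illustrative, defaulting to the current ~36s behavior.
timeout="${IPAMD_TIMEOUT_SECONDS:-36}"
interval=3
attempts=$((timeout / interval))
echo "will retry up to $attempts times ($timeout seconds total)"
```

With the default, this preserves today's 36-second window, while nodes with slow startup (e.g. heavy boot-time workloads ahead of kube-proxy) could raise it via the DaemonSet's env section.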
After the upgrade, the initial `aws-node` Pod startup is failing with:

The subsequent runs are always successful.

Pod state after restart:

IPAMd logs:

DaemonSet:

What I've tried and it didn't help:

`kube-proxy` has started at:

Relates to:
#872
#865
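One of the workarounds discussed above is raising `initialDelaySeconds` on the `aws-node` container's liveness probe. For reference, such a probe might look like the fragment below; the command path and values are illustrative examples, not necessarily the manifest shipped with this release:

```yaml
# Illustrative only: liveness probe on the aws-node container with a larger
# initialDelaySeconds, as a workaround attempt for slow node startup.
livenessProbe:
  exec:
    command:
      - /app/grpc-health-probe      # example health-check binary, may differ per release
      - -addr=:50051
  initialDelaySeconds: 60           # bumped from the default as a workaround attempt
  timeoutSeconds: 10
```

Note that, as discussed above, this only delays the probe; it does not extend the ~36-second timeout inside `entrypoint.sh`, which is why bumping it did not help here.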