Process errors with i/o timeout against the kube api endpoint #542
I should also note, I have a "jumppod" which is just running ubuntu with kubectl installed and a RBAC setup, relevant deployment bits:
and RBAC:
and from that jumppod I can run:
So from inside of the cluster it's totally OK to talk to the API using a cluster config.
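The in-cluster check described above can be sketched as follows (a minimal example using the standard in-cluster service-account mount paths and the `kubernetes.default.svc` service name; run it from a shell inside any pod):

```shell
# Query the API server from inside a pod using the pod's own service account.
# The token and CA paths below are the Kubernetes defaults for mounted
# service-account credentials.
SA=/var/run/secrets/kubernetes.io/serviceaccount
TOKEN=$(cat "$SA/token")
curl --cacert "$SA/ca.crt" \
     -H "Authorization: Bearer $TOKEN" \
     https://kubernetes.default.svc/version
```

If this returns the API server's version JSON, in-cluster connectivity and the service-account credentials are fine, and the problem is specific to the failing pod or its node.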
Also interesting: if I build the …

Perhaps there is something "wrong" with the container I am using? Or perhaps that container is incompatible in some way?
The more I poke at this error, the more I realize it's probably the configuration of the host kube-node. Tracking issue with the aws-vpc-cni here: aws/amazon-vpc-cni-k8s#180
Terminating those EC2 instances and letting the kops-configured instance group autoscaler replace them has worked; the pods can now resolve DNS and connect to other hosts.
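The node-replacement step above can be done along these lines (a sketch; the node name and instance ID are hypothetical, and it assumes the AWS CLI is configured and the nodes belong to an autoscaling group managed by kops):

```shell
# Drain the suspect node so its workloads reschedule elsewhere,
# then terminate the instance; the kops-managed ASG brings up a
# fresh, correctly configured replacement.
kubectl drain ip-10-0-1-23.ec2.internal --ignore-daemonsets --delete-local-data
aws ec2 terminate-instances --instance-ids i-0123456789abcdef0
```

Draining first avoids abruptly killing pods that could otherwise be rescheduled gracefully.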
Thanks for the detailed report @xrl!
I am not quite sure I understand. Are you saying the issue is resolved? If not, can you try running your …
@mxinden correct, the issue is "resolved". More like, it's not closing!
Is this a BUG REPORT or FEATURE REQUEST?:
/kind bug
What happened: I installed kube-state-metrics from kube-prometheus and it has been in a restart loop, timing out while talking to the kube API.
What you expected to happen: kube-state-metrics should not time out talking to the kube API.
How to reproduce it (as minimally and precisely as possible):
Use this container definition:
The full deployment, which I'm running verbatim, is here: https://github.com/coreos/prometheus-operator/blob/master/contrib/kube-prometheus/manifests/kube-state-metrics-deployment.yaml
Once that's installed, run `k get pods` and then look at the logs:
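Assuming the deployment runs in the `monitoring` namespace (as in kube-prometheus), inspecting the restart loop looks roughly like this (the pod name suffix is hypothetical; copy the real one from the `get pods` output):

```shell
# List the kube-state-metrics pod(s) and their restart counts.
kubectl -n monitoring get pods -l app=kube-state-metrics

# Pull logs from the previous (crashed) container instance to see
# the i/o timeout errors that preceded the restart.
kubectl -n monitoring logs kube-state-metrics-abc123 kube-state-metrics --previous
```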
Anything else we need to know?:
I have confirmed from a standalone pod that I can access the kube cluster:
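A quick way to run such a check from a throwaway pod (a sketch; the pod name and image are arbitrary choices, not from the original report):

```shell
# Spin up a temporary busybox pod that is deleted on exit, and test
# whether in-cluster DNS can resolve the API server's service name.
kubectl run -it --rm debug --image=busybox --restart=Never -- \
    nslookup kubernetes.default.svc.cluster.local
```

If DNS resolution fails here but works from other nodes, that points at a per-node networking problem (as it turned out to be with the aws-vpc-cni in this issue) rather than at kube-state-metrics itself.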
Environment:
- kubectl version:
- kube-state-metrics 1.4.0 or 1.3.1; they both fail the same way.