
not creating stats for kubernetes 1.15.1 #290

Closed
staticdev opened this issue Jul 21, 2019 · 18 comments
Labels
lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Comments

@staticdev

I am using the master branch, and when I try kubectl top node I get:

error: metrics not available yet

And when I try kubectl top pods I get:

W0721 20:01:31.786615 21232 top_pod.go:266] Metrics not available for pod default/pod-deployment-57b99df6b4-khh84, age: 27h31m59.78660593s
error: Metrics not available for pod default/pod-deployment-57b99df6b4-khh84, age: 27h31m59.78660593s

@shundezhang

I am having issues running metrics-server 0.3.3 on k8s 1.15.0 too. I am getting a lot of TLS errors in the metrics-server log:

I0722 01:32:08.512698 1 log.go:172] http: TLS handshake error from 192.168.22.192:43146: EOF
I0722 01:32:08.512847 1 log.go:172] http: TLS handshake error from 192.168.22.192:43150: EOF
I0722 01:32:08.512911 1 log.go:172] http: TLS handshake error from 192.168.22.192:43148: EOF
I0722 01:32:08.512946 1 log.go:172] http: TLS handshake error from 192.168.22.192:43152: EOF
I0722 01:32:08.513372 1 log.go:172] http: TLS handshake error from 192.168.22.192:43154: EOF

The source IP 192.168.22.192 is the internal IP of one of the masters. In the API server log of a master I got:

E0722 01:43:55.398927 1 available_controller.go:407] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.101.28.56:443: Get https://10.101.28.56:443: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
E0722 01:44:00.399770 1 available_controller.go:407] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.101.28.56:443: Get https://10.101.28.56:443: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)

It seems the API server has a problem talking to the metrics server.
Just wondering, does metrics-server support 1.15?

@metri

metri commented Jul 22, 2019

Hi folks! Adding the option --enable-aggregator-routing=true to the kube-apiserver manifest helped for me.

More details here: kubernetes/kubernetes#56430 (comment)
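
For anyone looking for where that flag goes: a minimal sketch, assuming a kubeadm setup where the kube-apiserver static-pod manifest lives at /etc/kubernetes/manifests/kube-apiserver.yaml (the path and surrounding fields may differ in your cluster):

# /etc/kubernetes/manifests/kube-apiserver.yaml (kubeadm default path; an assumption)
spec:
  containers:
  - name: kube-apiserver
    command:
    - kube-apiserver
    # ... existing flags ...
    # Route aggregated-API requests (e.g. v1beta1.metrics.k8s.io) to the
    # endpoint IPs instead of the service ClusterIP:
    - --enable-aggregator-routing=true

The kubelet picks up the edit and restarts the static pod automatically.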

@staticdev
Author

staticdev commented Jul 24, 2019

@metri I added the following to the deployment in metrics-server/deploy/1.8+/metrics-server-deployment.yaml:

command:
        - /metrics-server
        # Skip verification of the kubelet's serving certificate
        - --kubelet-insecure-tls
        # Scrape kubelets by node InternalIP instead of hostname
        - --kubelet-preferred-address-types=InternalIP

It now works. Does your change have the same effect?

@beanssoft

/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label (denotes an issue or PR has remained open with no activity and has become stale) Jul 24, 2019
@shundezhang

@staticdev how do you configure kube-apiserver?

@metri

metri commented Jul 25, 2019

@staticdev
Yes. I made the same changes as you, but they did not help me (for some reason I assumed you had also made them, although you did not mention it). Then I started digging deeper into why the kube-apiserver could not talk to the metrics server, and in the end I found a similar problem, and the answer, in the issue linked above.

PS: Sorry for my English.

@staticdev
Author

@shundezhang I am using default kubeadm configuration with calico 3.8 also on default.

@staticdev
Author

@metri I had to kubectl delete -f metrics-server/deploy/1.8+ and then kubectl create -f metrics-server/deploy/1.8+ after making the changes; the sequence is spelled out below. After a few seconds you should be able to see results from kubectl top node (make sure the metrics-server pods are running with kubectl get pods --all-namespaces).
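
As commands (a sketch, assuming you run them from the metrics-server repo root and the manifests live under deploy/1.8+/):

# Re-create the deployment so the new flags take effect
kubectl delete -f deploy/1.8+/
kubectl create -f deploy/1.8+/

# Confirm the metrics-server pod is up
kubectl get pods --all-namespaces | grep metrics-server

# Give it a little time to scrape, then:
kubectl top node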

@greg9702

greg9702 commented Jul 27, 2019

@staticdev Can you show your entire metrics-server-deployment.yaml?
I am using 1.15.0, have tried everything, and cannot make it work.
Still getting "error: metrics not available yet".

Edit:
I had to wait a couple of minutes and now it works.
Thanks @staticdev for your advice.

@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot added the lifecycle/rotten label (denotes an issue or PR that has aged beyond stale and will be auto-closed) and removed the lifecycle/stale label Aug 26, 2019
@pierluigilenoci
Contributor

Have you tried this: #278 (comment)?

@staticdev
Author

@PierluigiLenociAkelius I used (as in the comments above):

command:
        - /metrics-server
        - --kubelet-insecure-tls
        - --kubelet-preferred-address-types=InternalIP

@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

@k8s-ci-robot
Contributor

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@HYChou0515

To be more explicit about @staticdev's suggestion: I modified metrics-server/deploy/1.8+/metrics-server-deployment.yaml as below, and it solved this issue.

...
apiVersion: extensions/v1beta1
...
spec:
  template:
    spec:
      containers:
      - name: metrics-server
        command:
        - /metrics-server
        - --kubelet-insecure-tls
        - --kubelet-preferred-address-types=InternalIP
...

@MIBc
Contributor

MIBc commented Dec 13, 2019

@shundezhang have you solved the problem?

@shundezhang

@MIBc yes, it was an MTU issue. Had to adjust that in calico config.
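
For anyone hitting the same thing, a sketch of where that setting lives: in the Calico v3.8 manifests the veth MTU comes from the calico-config ConfigMap in kube-system (key name as in the v3.8 manifest; pick a value that fits your underlay network):

# kube-system/calico-config ConfigMap (Calico v3.8)
kind: ConfigMap
apiVersion: v1
metadata:
  name: calico-config
  namespace: kube-system
data:
  # Leave headroom for IP-in-IP encapsulation,
  # e.g. 1440 on a standard 1500-byte network
  veth_mtu: "1440"

Restart the calico-node pods afterwards so the CNI config gets re-rendered with the new MTU.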

@nurhun

nurhun commented Jul 7, 2021

Re @HYChou0515's suggestion above (modifying metrics-server/deploy/1.8+/metrics-server-deployment.yaml with the --kubelet-insecure-tls and --kubelet-preferred-address-types=InternalIP flags):

I'm running a k3s cluster. I modified the static manifest at /var/lib/rancher/k3s/server/manifests/metrics-server/metrics-server-deployment.yaml with the command flags above, but this doesn't solve the issue.

Note: k3s runs with the flannel CNI by default. Any ideas?
