
Metrics not available in Kubernetes 1.16 : kubectl top pods result in error #300

Closed
mishaque opened this issue Aug 13, 2019 · 25 comments
Labels
kind/support Categorizes issue or PR as a support question.

Comments

@mishaque

I get the following error when running kubectl top pods:

W0813 11:23:15.605148 18433 top_pod.go:266] Metrics not available for pod default/metric-service-metrics-server-687d884796-5w6fm, age: 12h41m30.605128s
error: Metrics not available for pod default/metric-service-metrics-server-687d884796-5w6fm, age: 12h41m30.605128

@OsandaDeemantha

kubectl top pods
Result:

W0827 14:26:49.076592   81772 top_pod.go:266] Metrics not available for pod default/busybox, age: 332h53m37.076586606s
error: Metrics not available for pod default/busybox, age: 332h53m37.076586606s

kubectl logs <metrics-server-pod-name>
Result:
E0827 13:37:10.001523 1 reststorage.go:147] unable to fetch pod metrics for pod default/busybox: no metrics known for pod

I'm facing the same issue when I run the kubectl top pods command, which previously worked without a problem. Initially I had about 2-5 pods; now there are around 15-20. The top pods command no longer works, but top nodes works correctly.
I tried increasing the replicas and editing the metrics-server-deployment.yaml file, adding the parameters below to fix the error, but that doesn't work for me either.
command:
  - /metrics-server
  - --kubelet-insecure-tls
  - --kubelet-preferred-address-types=InternalIP
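For reference, flags like these normally sit on the metrics-server container in its Deployment manifest. A minimal sketch (the image tag and surrounding field layout are illustrative, not taken from this thread):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: metrics-server
  namespace: kube-system
spec:
  template:
    spec:
      containers:
        - name: metrics-server
          image: k8s.gcr.io/metrics-server-amd64:v0.3.6  # illustrative tag
          command:
            - /metrics-server
            # Skip verification of kubelet serving certificates.
            - --kubelet-insecure-tls
            # Contact kubelets via InternalIP instead of the default Hostname.
            - --kubelet-preferred-address-types=InternalIP
```

After editing, re-apply the manifest and confirm the metrics-server pod restarts cleanly.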

@pierluigilenoci
Contributor

Have you tried this: #278 (comment)?

@serathius
Contributor

serathius commented Nov 5, 2019

Would you be able to provide more logs from metrics-server?

E0827 13:37:10.001523 1 reststorage.go:147] unable to fetch pod metrics for pod default/busybox: no metrics known for pod

doesn't point to the cause of the problem, just the result.

@serathius serathius added the kind/bug Categorizes issue or PR as related to a bug. label Dec 12, 2019
@aguinaldoabbj

Very same problem here with 1.16 and metrics-server 0.3.6 1.8+.

@ysicing

ysicing commented Dec 25, 2019

Very same problem here with 1.17 and metrics-server 0.3.6 1.8+.

E1225 07:31:57.207998       1 reststorage.go:160] unable to fetch pod metrics for pod monitoring/node-exporter-vwz9c: no metrics known for pod
E1225 07:31:57.208002       1 reststorage.go:160] unable to fetch pod metrics for pod monitoring/alertmanager-main-0: no metrics known for pod
E1225 07:31:57.208007       1 reststorage.go:160] unable to fetch pod metrics for pod monitoring/prometheus-adapter-68c696c6bc-cb5nf: no metrics known for pod
E1225 07:31:57.208012       1 reststorage.go:160] unable to fetch pod metrics for pod monitoring/node-exporter-brv9c: no metrics known for pod
E1225 07:31:57.208017       1 reststorage.go:160] unable to fetch pod metrics for pod kubernetes-dashboard/dashboard-metrics-scraper-7945b64478-jbclf: no metrics known for pod
E1225 07:31:57.208021       1 reststorage.go:160] unable to fetch pod metrics for pod kube-system/kube-controller-manager-local.72.80.k8s.talk.me: no metrics known for pod

It works for me.


@petertsu

The following fixes the issue for 0.3.6 1.8+ with k8s v1.14.10

- --kubelet-insecure-tls=true
- --kubelet-preferred-address-types=InternalIP

@Nurlan199206

@petertsu works on 1.16... thank you

@pierluigilenoci
Contributor

@serathius fixed?

@onedr0p

onedr0p commented Feb 7, 2020

No fix, apparently this is a wontfix issue. He closed all related issues.

@serathius serathius added kind/support Categorizes issue or PR as a support question. and removed kind/bug Categorizes issue or PR as related to a bug. labels Feb 7, 2020
@serathius
Contributor

Closing per Kubernetes issue triage policy

GitHub is not the right place for support requests.
If you're looking for help, check Stack Overflow and the troubleshooting guide.
You can also post your question on the Kubernetes Slack or the Discuss Kubernetes forum.
If the matter is security related, please disclose it privately via https://kubernetes.io/security/.

Explanation here: #425
You can post questions on #sig-instrumentation slack channel.

@onedr0p

onedr0p commented Feb 7, 2020

Ok cool, create a new rule and close all "support issues". Sounds like a good idea to me! /s

This is clearly a BUG with metrics-server.

@DerrickMartinez

DerrickMartinez commented Feb 7, 2020

Major issues with metrics-server now; it's buggy and I'm looking to move off it in my environments.

@serathius
Contributor

serathius commented Feb 7, 2020

Thanks for your feedback. Sorry for closing this issue, but those policies are there for a reason (we were just not very good at respecting them). There are better ways to solve the current problems with metrics-server than trying to debug every cluster setup.

This issue is a good example of an author abandoning it without giving any specifics. Without specifics on how to reproduce the issue, it's impossible to find and fix bugs.

Closing this issue doesn't mean that we should leave the problem unsolved. I think most of the problems people encounter with metrics-server have a solution hidden somewhere in closed issues. We should make this knowledge more accessible -> documentation.

I will be looking for help with creating better documentation. I hope you will help me make metrics-server easy to run for everyone.

@pierluigilenoci
Contributor

@mishaque ?

@SHUFIL

SHUFIL commented Apr 1, 2020

In my case I'm using kops on AWS. kubectl top pod works fine, but the metrics-server log always shows the error below:

metrics not available for pod default/hellow-world-deployment-5464gu

Because of this error my auto-scaling is not working.

To resolve it I added the lines below to the metrics-server deployment YAML file, but the issue persists. How can I resolve this?

command:
- --kubelet-insecure-tls=true
- --kubelet-preferred-address-types=InternalIP
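One thing worth checking in snippets like the one above: in a Kubernetes container spec, command: replaces the image's entrypoint, so the binary itself must be listed there; flags alone are usually better placed under args:. A hedged sketch (the image tag is an illustrative assumption):

```yaml
containers:
  - name: metrics-server
    image: k8s.gcr.io/metrics-server-amd64:v0.3.6  # illustrative tag
    # Leave the image entrypoint intact and pass only the flags:
    args:
      - --kubelet-insecure-tls=true
      - --kubelet-preferred-address-types=InternalIP
```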

@serathius
Contributor

@SHUFIL Even though your problem looks similar, it can have a totally different cause.
Please open a separate issue where you can provide more details about your setup.

@dcrearer

Adding

- --kubelet-insecure-tls=true
- --kubelet-preferred-address-types=InternalIP

to the metrics-server deployment YAML corrected this issue for me as well.

@tirelibirefe

Even if you close these support requests, there is a major issue: it doesn't work.
Although I set --kubelet-insecure-tls=true and --kubelet-preferred-address-types=InternalIP, it still gives the "unable to fetch pod metrics for pod..." error.

@serathius
Contributor

serathius commented Jul 27, 2020

Hey @tirelibirefe
Thanks for bringing this up. I understand that having Metrics Server not work out of the box is not a great experience. Unfortunately, due to the overwhelming size of the apiserver & kubelet configuration matrix, it's impossible to achieve a solution that works for everyone.

Configuring Metrics Server requires substantial knowledge about how the kubelet is configured in your cluster. For example, --kubelet-insecure-tls=true solves cases where kubelet certificates are not signed by a CA that Metrics Server is aware of, and --kubelet-preferred-address-types=InternalIP changes which kubelet address is used, replacing the default Hostname, which assumes you have DNS configured to resolve within the cluster.

All the suggestions found in those issues help in some cases, but applying them at random doesn't help. Some comments are up-voted just because they solve problems that appear more commonly or apply to popular K8s distributions like kops. I don't agree with an approach where we post random combinations of flags and ask users to hope that one of them will work. I tried to reach out to popular K8s distribution maintainers to work with them on testing configurations and creating instructions, but it's hard to cover all cases.

For now I have proposed an approach where, if someone is really unable to configure Metrics Server, they can fill out the template https://raw.githubusercontent.com/kubernetes-sigs/metrics-server/master/.github/ISSUE_TEMPLATE/bug-report.md, which requires providing all the information needed to debug the problem. With that, I and others can really help and work with that person. Of course there could be a better way to solve this; I would be really interested in discussing one.

About your issue #561: "unable to fetch pod metrics for pod..." clearly points to either a problem with the network or with the configuration of the kubelet/Metrics Server. But by not filling out the provided template and removing the end of the string, you basically prevented anyone from finding the root cause. We have already had some success with this approach; for example, the similar issue #457 provided information that was enough to root-cause the problem. I would like to understand why you didn't fill out the template.

Please make sure that when requesting support you provide as much information as possible (obfuscating sensitive information is OK) and work with the people who are trying to help you. I know it's frustrating when something doesn't work, but please remember that the people trying to help you are usually using their own personal time.
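To make the two flags discussed above concrete, the address-type fallback order can also be spelled out explicitly; a sketch (the ordering shown is an illustrative assumption, not a recommendation from this thread):

```yaml
args:
  # Accept kubelet serving certs not signed by a CA metrics-server trusts.
  - --kubelet-insecure-tls=true
  # Try InternalIP first, then fall back to Hostname and ExternalIP,
  # instead of a Hostname-first order that assumes working in-cluster DNS.
  - --kubelet-preferred-address-types=InternalIP,Hostname,ExternalIP
```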

@mayankkapoor


This worked for me on a ClusterAPI cluster with k8s v1.19 and Calico. --kubelet-insecure-tls=true helps with the kubelet insecure certificate, and --kubelet-preferred-address-types=InternalIP helps with Calico CNI.

@jonycw

jonycw commented Oct 27, 2020

The problem is much the same with 1.19 and metrics-server 0.3.6 1.8+: top_pod.go:265] Metrics not available for pod

@fliphess

I would love to see this bug fixed in the documentation or the code.

@eduardobaitello

Same problem here when using Minikube with minikube addons enable metrics-server.

When a pod restarts, metrics become available in kubectl top for 2 minutes. After that, the error: Metrics not available for pod problem starts.

@serathius
Contributor

Minikube addon for metrics-server is maintained by minikube. Please post issues related to addon on https://github.com/kubernetes/minikube

@yimiaoxiehou

Hey guys, I also hit this problem on Docker (19.3.14) and k8s (v1.20.9).

I found that it's because my Docker root dir was changed from the default (/var/lib/docker), which made the kubelet unable to read cgroup info. After adding the kubelet arg --docker-root=/mydocker-root, the problem was fixed.
