kubeadm init errors out kubeadm/app/phases/addons/dns/dns.go #2423
hi,
this looks like the output of the
downloading the coredns image would not have solved the preflight error that you are getting. /kind support
i think something else is going on in your setup. what configuration are you passing to kubeadm --config?
If a user upgrades dns to v1.8.0 before
from my understanding, the user manually set coredns to v1.8.0 in an old cluster; a subsequent kubeadm upgrade will then fail, because it cannot use v1.8.0 as the base coredns version. the reason for that is that the coredns migration library for 1.20 does not recognize 1.8.x as a base version and therefore cannot migrate it.
not sure if this is the same as the
i don't think there is a need to backport it, since long term we want coredns to be deployed and upgraded by this operator:
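(the base version that the migration library sees comes from the live cluster objects; a quick way to check what is actually deployed is the following sketch — it assumes the standard kube-system object names, which are not quoted in this thread:)

```shell
# Sketch: check which CoreDNS version the cluster currently records
# (standard kube-system object names assumed).
kubectl -n kube-system get deployment coredns \
  -o jsonpath='{.spec.template.spec.containers[0].image}{"\n"}'
# The Corefile that the migration library parses lives in this ConfigMap:
kubectl -n kube-system get configmap coredns -o yaml
```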
Sorry for the delay, got busy. This is an init for an HA install with an external etcd on a freshly installed bare metal CentOS 7, not an upgrade. When the error occurs there is an API call. One interesting point is: "containers":[{
The full text:
I0410 22:44:26.253319 21432 round_trippers.go:454] GET https://192.168.4.15:6443/api/v1/namespaces/kube-system/configmaps/coredns?timeout=10s 200 OK in 4 milliseconds
kubeadm 1.20.5 does not support coredns 1.8.3. what is the configuration file that you pass to
looking at your logs, that seems to be the case. if you strictly want coredns 1.8.3, use kubeadm 1.21.
kubeadm version
kubeadm init --config /kubernetes/01.kubeadm-config.yaml --upload-certs --v=10
apiVersion: kubeadm.k8s.io/v1beta2
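(the config above is cut off at the apiVersion line; for context, a minimal kubeadm v1beta2 ClusterConfiguration for an HA init with an external etcd looks roughly like the sketch below — this is hypothetical, not the reporter's actual file, and all endpoints and certificate paths are placeholders:)

```yaml
# Hypothetical sketch only; endpoint addresses and cert paths are placeholders.
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.20.5
controlPlaneEndpoint: "192.168.4.15:6443"
etcd:
  external:
    endpoints:
      - https://etcd-0.example.com:2379
    caFile: /etc/kubernetes/pki/etcd/ca.crt
    certFile: /etc/kubernetes/pki/apiserver-etcd-client.crt
    keyFile: /etc/kubernetes/pki/apiserver-etcd-client.key
```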
we are not seeing problems with coredns 1.8.3 and kubeadm 1.21 in our CI. are you seeing this same replicaset problem with kubeadm 1.21?
and try to understand more about your problem.
okay, that's interesting. When the init hit the error, I thought no master node install had been done. After trying your suggested command, I looked at the running config and saw pods that were not installed as part of this install. What I think is happening is that the installation is getting config information from etcd. The external etcd is one I am reusing from a previous cluster installation, which could be why it looks like an upgrade when the command I am issuing is an init. I need to figure out how to flush the old data out of etcd. Thanks for your help. I'll come back and comment after I've dealt with etcd and let you know how it turns out.
Flushing the data out of etcd solved it. I manually deleted everything using kubectl to clear out etcd. Thanks again!
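(for readers hitting the same thing: the flush can also be done directly against the external etcd with etcdctl — a sketch; the endpoint and certificate paths below are placeholders, and this wipes every key in the store:)

```shell
# DESTRUCTIVE: deletes every key in etcd. Endpoint and cert paths are placeholders.
ETCDCTL_API=3 etcdctl \
  --endpoints=https://etcd-0.example.com:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/apiserver-etcd-client.crt \
  --key=/etc/kubernetes/pki/apiserver-etcd-client.key \
  del "" --prefix
```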
Is this a request for help?
Not sure.
Is this a BUG REPORT or FEATURE REQUEST?
BUG REPORT
Versions
kubeadm version (use kubeadm version):
kubeadm version: &version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.5", GitCommit:"6b1d87acf3c8253c123756b9e61dac642678305f", GitTreeState:"clean", BuildDate:"2021-03-18T01:08:27Z", GoVersion:"go1.15.8", Compiler:"gc", Platform:"linux/amd64"}
Environment:
Kubernetes version (use kubectl version):
Client Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.5", GitCommit:"6b1d87acf3c8253c123756b9e61dac642678305f", GitTreeState:"clean", BuildDate:"2021-03-18T01:10:43Z", GoVersion:"go1.15.8", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.5", GitCommit:"6b1d87acf3c8253c123756b9e61dac642678305f", GitTreeState:"clean", BuildDate:"2021-03-18T01:02:01Z", GoVersion:"go1.15.8", Compiler:"gc", Platform:"linux/amd64"}
OS (e.g. from /etc/os-release):
NAME="CentOS Linux"
VERSION="8"
ID="centos"
ID_LIKE="rhel fedora"
VERSION_ID="8"
PLATFORM_ID="platform:el8"
PRETTY_NAME="CentOS Linux 8"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:centos:centos:8"
HOME_URL="https://centos.org/"
BUG_REPORT_URL="https://bugs.centos.org/"
CENTOS_MANTISBT_PROJECT="CentOS-8"
CENTOS_MANTISBT_PROJECT_VERSION="8"
Kernel (e.g. uname -a):
Linux SERVERNAME 4.18.0-240.15.1.el8_3.x86_64 #1 SMP Mon Mar 1 17:16:16 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux
What happened?
I ran kubeadm init on a fresh bare metal install and I got the error:
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
I0327 17:28:24.778708 17181 kubeletfinalize.go:132] [kubelet-finalize] Restarting the kubelet to enable client certificate rotation
start version '1.8.3' not supported
unable to get list of changes to the configuration.
k8s.io/kubernetes/cmd/kubeadm/app/phases/addons/dns.isCoreDNSConfigMapMigrationRequired
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/phases/addons/dns/dns.go:387
k8s.io/kubernetes/cmd/kubeadm/app/phases/addons/dns.createCoreDNSAddon
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/phases/addons/dns/dns.go:274
To troubleshoot, I also tried an upgrade on top of the failed init and got:
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[preflight] Running pre-flight checks.
[preflight] Some fatal errors occurred:
[ERROR CoreDNSUnsupportedPlugins]: start version '1.8.3' not supported
[ERROR CoreDNSMigration]: CoreDNS will not be upgraded: start version '1.8.3' not supported
[preflight] If you know what you are doing, you can make a check non-fatal with
--ignore-preflight-errors=...
To see the stack trace of this error execute with --v=5 or higher
What you expected to happen?
I was expecting the init to finish and provide join commands for the other masters/workers.
How to reproduce it (as minimally and precisely as possible)?
As far as I can tell, just run kubeadm init for Kubernetes v1.20.5.
I am setting up a bare metal multi-master cluster.
Anything else we need to know?
From what I can piece together, it looks like kubeadm downloads the CoreDNS container image version 1.7.0 as expected, but the latest CoreDNS is 1.8.3, which is not supported. During finalization of the kubeadm init, it is either trying to get the latest CoreDNS and failing, or looking at the latest version and failing. CoreDNS 1.8.3 does not get downloaded into the local docker; only 1.7.0 does.
I tried manually downloading CoreDNS 1.8.3:
docker pull coredns/coredns:1.8.3
and reran the init (after a reset), and it worked!
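(put together, the workaround sequence looks like the sketch below — it is assembled from the commands quoted in this thread, using the config path from the reporter's earlier comment:)

```shell
# Sketch of the workaround described above, for kubeadm 1.20.5.
sudo kubeadm reset -f
docker pull coredns/coredns:1.8.3
sudo kubeadm init --config /kubernetes/01.kubeadm-config.yaml \
  --upload-certs --v=10
```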
Something strange is going on with the init.