The issue also causes the etcd version to remain unchanged after running the upgrade-cluster.yml playbook when etcd_deployment_type: 'kubeadm'.
Since kubernetes-sigs/kubespray#11352 removed the --config flag that pointed kubeadm at the updated kubeadm-config.yaml, the etcd pod manifest is no longer upgraded during kubeadm upgrade: the upgrade reads the kubeadm-config ConfigMap, which does not reflect the new etcd image version.
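A quick way to see the mismatch after an upgrade (a hypothetical check; the ConfigMap name and manifest path are the standard kubeadm ones):

```shell
# Both the kubeadm-config ConfigMap and the running static pod still show the
# old etcd image tag, even though group_vars already request the new version.
kubectl -n kube-system get configmap kubeadm-config -o yaml | grep -A3 'etcd:'
grep 'image:' /etc/kubernetes/manifests/etcd.yaml
```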
What happened?
Recently, the --config option was removed from kubeadm upgrade following its deprecation cycle:
This introduces a regression: there is now no playbook that reconfigures kubeadm.
What did you expect to happen?
The upgrade-cluster.yml playbook reconfigures kubeadm.
How can we reproduce it (as minimally and precisely as possible)?
Change a kubeadm-related variable such as kube_apiserver_pod_eviction_not_ready_timeout_seconds and run upgrade-cluster.yml: the new value is never applied to the cluster (see the sketch below).
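A minimal repro sketch, assuming a standard sample-based inventory layout (only the variable name comes from this report; the value and paths are illustrative):

```shell
# Change a kubeadm-rendered setting in group_vars.
echo 'kube_apiserver_pod_eviction_not_ready_timeout_seconds: 120' \
  >> inventory/mycluster/group_vars/k8s_cluster/k8s-cluster.yml

# Run the upgrade: the kubeadm-config ConfigMap (and therefore the regenerated
# static pod manifests) keeps the old configuration, because no play re-uploads
# the updated kubeadm-config.yaml.
ansible-playbook -i inventory/mycluster/hosts.yaml upgrade-cluster.yml
```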
OS
Version of Ansible
Version of Python
Python 3.10.12
Version of Kubespray (commit)
f9ebd45
Network plugin used
calico
Full inventory with variables
not relevant
Command used to invoke ansible
ansible-playbook -i inventory/mycluster/hosts.yaml upgrade-cluster.yml
Output of ansible run
not relevant
Anything else we need to know
I'm not sure if we should completely separate this feature into a dedicated playbook or reintroduce this behavior with a command such as the one sketched below.
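A possible shape for that command, run on the first control-plane node before the upgrade (a sketch only; the kubeadm-config.yaml path is an assumption based on where Kubespray renders it):

```shell
# Re-upload the freshly templated kubeadm-config.yaml into the kubeadm-config
# ConfigMap so that `kubeadm upgrade` picks up the new settings (e.g. the new
# etcd image) instead of the stale ConfigMap contents.
kubeadm init phase upload-config kubeadm --config /etc/kubernetes/kubeadm-config.yaml
```

Whether that step belongs in upgrade-cluster.yml itself or in a dedicated playbook is exactly the open question.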
That topic has already been discussed a bit here: