Kubernetes Master Failed: FAILED - RETRYING: Create kubeadm token for joining nodes with 24h expiration #5227
On running /opt/bin/kubeadm --kubeconfig /etc/kubernetes/admin.conf token create manually, the last output before it hangs is: I0930 18:00:52.823950 13595 token.go:115] [token] validating mixed arguments
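That "validating mixed arguments" line is info-level logging printed before kubeadm contacts the API server, so the hang happens while talking to the server. A minimal next step, assuming nothing beyond the command in the comment above, is to re-run with kubeadm's verbose flag:

```
# Verbose client logging shows which API call the token creation blocks on
/opt/bin/kubeadm --kubeconfig /etc/kubernetes/admin.conf token create --v=5
```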
I'm having this problem as well, Ubuntu version 18.04 LTS. My host.ini file has the groups [kube-master], [etcd], [kube-node], [calico-rr] and [k8s-cluster:children].
+1
I've also seen this problem with the kubeadm token occur after upgrading the underlying OS to Ubuntu 18.04.4 and adding a new node using the scale.yml playbook (using Kubespray 2.11.0). Looking at the logs on the nodes pointed to a kubelet cgroup driver problem.
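One way to pull those logs on an affected node, assuming the kubelet runs as a systemd unit (which is how Kubespray sets it up):

```
# Most recent kubelet log entries on the node
journalctl -u kubelet --no-pager -n 100
```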
I fixed this by setting the kubelet cgroup driver explicitly so that it matches the one Docker uses.
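For reference, a minimal sketch of checking and aligning the two drivers by hand; note that writing /etc/docker/daemon.json this way overwrites any existing file, so merge the setting in instead on an already-configured host:

```
# Which cgroup driver is Docker currently using?
docker info 2>/dev/null | grep -i 'cgroup driver'

# Switch Docker to the systemd driver (overwrites daemon.json!)
cat <<'EOF' | sudo tee /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
sudo systemctl restart docker kubelet
```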
Possible root cause: the kernel boots without cgroup swap limit support, which produces a warning in the logs. The warning can be resolved by updating the kernel command-line boot options and updating grub, as described at the end of the article at https://docs.docker.com/install/linux/linux-postinstall/ under the section "Your kernel does not support cgroup swap limit capabilities". I haven't tried re-running the playbook to confirm whether this fixes the issue, as manually setting the cgroup driver was already enough for me.
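The change that article describes, for reference (it needs a reboot to take effect):

```
# In /etc/default/grub, add to the kernel command line:
#   GRUB_CMDLINE_LINUX="cgroup_enable=memory swapaccount=1"
# then regenerate the grub config and reboot:
sudo update-grub
sudo reboot
```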
We also ran into the kubelet cgroup driver issue on Ubuntu 18 and Kubespray 2.11.0, and the fix samchal mentioned did work for us, but it was unrelated to the kubeadm token issue. In our case, we hit the token failure while trying to reconfigure the cluster.
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
Stale issues rot after 30d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
Rotten issues close after 30d of inactivity. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
@fejta-bot: Closing this issue. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Don't run the ansible-playbook from a master node; use a separate VM to manage your cluster.
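A minimal sketch of that setup, run from a dedicated control host with SSH key access to all nodes (the inventory path is just this issue's example):

```
# From a machine that is not part of the cluster
git clone https://github.com/kubernetes-sigs/kubespray.git
cd kubespray
ansible-playbook -b -i inventory/mycluster/inventory.ini cluster.yml
```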
I resolved it when I did:
|
Environment:
Cloud provider or hardware configuration:
OS (printf "$(uname -srm)\n$(cat /etc/os-release)\n"): Ubuntu 16.04.3 LTS
Version of Ansible (ansible --version): ansible 2.7.12
Kubespray version (commit) (git rev-parse --short HEAD): 8712bdd
Network plugin used: calico
Copy of your inventory file:
[all]
master-1 ansible_host=161.92.248.32 ip=161.92.248.32 ansible_user=philips ansible_sudo=yes
worker-1 ansible_host=161.92.248.33 ip=161.92.248.33 ansible_user=philips ansible_sudo=yes
[kube-master]
master-1
[kube-node]
worker-1
[etcd]
master-1
[k8s-cluster:children]
kube-master
kube-node
Command used to invoke ansible:
ansible-playbook -b --ask-become-pass --become-user=root -i inventory/mycluster/inventory.ini cluster.yml
Output of ansible run:
TASK [kubernetes/master : Create kubeadm token for joining nodes with 24h expiration (default)] **********************************************************************************************
Monday 30 September 2019 16:39:05 +0530 (0:00:00.100) 0:03:49.003 ******
FAILED - RETRYING: Create kubeadm token for joining nodes with 24h expiration (default) (5 retries left).
FAILED - RETRYING: Create kubeadm token for joining nodes with 24h expiration (default) (4 retries left).
FAILED - RETRYING: Create kubeadm token for joining nodes with 24h expiration (default) (3 retries left).
FAILED - RETRYING: Create kubeadm token for joining nodes with 24h expiration (default) (2 retries left).
FAILED - RETRYING: Create kubeadm token for joining nodes with 24h expiration (default) (1 retries left).
fatal: [master-1 -> 161.92.248.32]: FAILED! => {"attempts": 5, "changed": true, "cmd": ["/opt/bin/kubeadm", "--kubeconfig", "/etc/kubernetes/admin.conf", "token", "create"], "delta": "0:01:15.204831", "end": "2019-09-30 16:47:09.210040", "msg": "non-zero return code", "rc": 1, "start": "2019-09-30 16:45:54.005209", "stderr": "timed out waiting for the condition", "stderr_lines": ["timed out waiting for the condition"], "stdout": "", "stdout_lines": []}
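"timed out waiting for the condition" here means kubeadm never got an answer from the API server within its retry window, so the token task is the symptom rather than the fault. A minimal triage sketch on the master, assuming only kubeadm/Kubespray defaults (admin.conf path, port 6443):

```
# Is the API server container running?
docker ps --filter name=kube-apiserver

# Does the admin kubeconfig reach the server at all?
kubectl --kubeconfig /etc/kubernetes/admin.conf get nodes

# Is anything listening on the API server's default secure port?
sudo ss -tlnp | grep 6443
```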