Adding a k3s server node from a previous cluster causes 'x509: certificate signed by unknown authority' #2034
Comments
I believe this should probably be moved to k3s, but I will let @cjellick decide.
This is probably an odd corner case - k3s nodes don't expect to be hot-swapped into different clusters without having state from the previous installation cleaned out. However, a node joining the cluster should probably fail to join if its local certs don't match those on the other nodes.
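For reference, a minimal sketch of cleaning out that state before moving a node to a new cluster, assuming k3s was installed via the standard install script (which creates these uninstall helpers):

# Wipe all state from the previous installation before rejoining.
/usr/local/bin/k3s-uninstall.sh          # on a server node
# /usr/local/bin/k3s-agent-uninstall.sh  # on an agent node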
I've got the exact same issue, only I want to recover the whole cluster after cluster failures.
Tracking this in #3040.
PR #3398 should take care of this issue, as it will introduce behavior that updates the certs on disk if they don't match and are older than the certificates in the datastore.
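As a quick way to see whether a node's on-disk CA diverges from the rest of the cluster (this check is my own illustration, not part of the PR; the path is the k3s default):

# Run on each server node; differing fingerprints mean the node still
# carries a CA from a previous cluster.
openssl x509 -in /var/lib/rancher/k3s/server/tls/server-ca.crt -noout -fingerprint -sha256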
Is it possible to confirm which release the fix is included in? Thanks!
Bump, would like to know which version has shipped the fix. |
This issue was closed like a year ago. Every currently supported version has the fix. |
What kind of request is this (question/bug/enhancement/feature request): bug
Steps to reproduce (least amount of steps as possible):
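The numbered steps were lost from this copy; going by the title and the result below, the reproduction is roughly as follows (the one-liner is the standard k3s install script, but the exact flags the reporter used are an assumption):

# 1. Install a k3s server on node (a) and note its node token.
curl -sfL https://get.k3s.io | sh -
# 2. On node (b), which previously served a *different* cluster and was
#    never uninstalled, join node (a)'s cluster:
curl -sfL https://get.k3s.io | sh -s - server --server https://<node-a>:6443 --token <node-a-token>
# 3. Watch node (b)'s logs for 'x509: certificate signed by unknown authority'.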
Result:
node (a):
node (b):
The k3s-serving secret will be updated and signed by the CA on node (b).
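One way to inspect which CA signed the current serving cert (my own check, assuming the cert is stored under the tls.crt key, the usual layout for this secret):

kubectl -n kube-system get secret k3s-serving -o jsonpath='{.data.tls\.crt}' | base64 -d | openssl x509 -noout -issuer -dates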
Other details that may be helpful:
7a. Delete all nodes and the k3s-serving cert:
kubectl --insecure-skip-tls-verify=true delete node $(hostname -s)
kubectl --insecure-skip-tls-verify=true -n kube-system delete secret k3s-serving
/usr/local/bin/k3s-uninstall.sh
7b. Reinstall k3s on at least 2 nodes (for me the issue didn't recover until I added 2). Deleting the k3s-serving secret and restarting k3s may be needed (see the sketch after this list).
7c. To recover from invalidated tokens I had to clear all SA tokens from all namespaces and restart all pods (note: many pods were stuck in Terminating, so I used forceful commands; a sketch follows this list).
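A hedged sketch of the secret deletion and restart from 7b, assuming k3s runs as the standard systemd service:

kubectl -n kube-system delete secret k3s-serving
systemctl restart k3s

And a sketch of the 7c recovery; the reporter's exact commands were not captured, so this is one way it might be done (the controller-manager recreates the token secrets automatically):

# Delete every service-account token secret so fresh ones are issued.
kubectl get secrets --all-namespaces --field-selector type=kubernetes.io/service-account-token \
  -o custom-columns=NS:.metadata.namespace,NAME:.metadata.name --no-headers |
  while read -r ns name; do kubectl -n "$ns" delete secret "$name"; done
# Force-delete pods (including those stuck in Terminating) so they restart
# with the new tokens.
for ns in $(kubectl get ns --no-headers -o custom-columns=NAME:.metadata.name); do
  kubectl -n "$ns" delete pods --all --force --grace-period=0
done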
Cluster information
Kubernetes version (use kubectl version): v1.18.4+k3s1 (97b7a0e)