[Release-1.21] Unable to start secondary etcd nodes if initial cluster member is offline #4752
Validated in k3s with RC v1.21.8-rc1+k3s1: in a 3-node cluster, server 2 and server 3 successfully restarted after being stopped, with server 1 started last. Steps:
NOTE: When testing with RC v1.21.8-rc1+k3s1, it was observed that restarting only server 2 while server 1 and server 3 are stopped causes server 2 to show the error
Although I can see that k3s is running on server 2.
Please advise if this is expected @dereknola
Awesome! That looks like correct behavior. The error seen when starting just one node is due to quorum loss in etcd; starting only one node does not restore quorum.
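For context, etcd (Raft) requires a strict majority of configured members to be available, i.e. floor(n/2) + 1 for an n-member cluster. The sketch below is a minimal illustration of that rule (the helper names `quorum_size` and `has_quorum` are hypothetical, not k3s or etcd code); it shows why restarting a single member of a 3-node cluster cannot restore quorum, while restarting two can.

```python
# Minimal illustration (not k3s/etcd source): a Raft-based store such as etcd
# needs a strict majority of its configured members to serve writes.

def quorum_size(members: int) -> int:
    """Smallest number of members that constitutes a majority."""
    return members // 2 + 1

def has_quorum(members: int, available: int) -> bool:
    """True if enough members are up for the cluster to have quorum."""
    return available >= quorum_size(members)

if __name__ == "__main__":
    cluster_size = 3
    # Only server 2 restarted while servers 1 and 3 are stopped:
    print(has_quorum(cluster_size, 1))  # False -> etcd reports quorum loss
    # Servers 2 and 3 restarted (the validated scenario above):
    print(has_quorum(cluster_size, 2))  # True  -> cluster recovers
```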
Backport #4746 to release-1.21