Hi there! I used RKE to deploy an HA cluster with 5 nodes. Currently every node runs the controlplane, etcd, and worker roles, but I'll separate them in the future. My cluster.yml looks roughly like this (node addresses and the SSH user are placeholders):
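```yaml
nodes:
  - address: 1.2.3.4
    user: ubuntu        # placeholder SSH user
    role: [controlplane, etcd, worker]
  - address: 1.2.3.5
    user: ubuntu
    role: [controlplane, etcd, worker]
  - address: 1.2.3.6
    user: ubuntu
    role: [controlplane, etcd, worker]
  - address: 1.2.3.7
    user: ubuntu
    role: [controlplane, etcd, worker]
  - address: 1.2.3.8
    user: ubuntu
    role: [controlplane, etcd, worker]
```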
Once I run `rke up`, the setup succeeds. However, the `server` key in the resulting kubeconfig points at the IP address of the last node, 1.2.3.8 in this case. That means that if this node goes down, I effectively lose the cluster with it: at the very least, I can no longer update or create objects in it.
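For example, the entry in the generated `kube_config_cluster.yml` ends up looking something like this:

```yaml
clusters:
- cluster:
    certificate-authority-data: <redacted>
    server: "https://1.2.3.8:6443"
  name: local
```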
I was able to work around this by assigning a virtual IP address to the last node and configuring it to fail over to another node should that one go down. With this in place I can indeed power off the last node without killing access to the cluster, but I have a feeling this is not the ideal solution.
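Roughly, the workaround looks like this (1.2.3.100 is a placeholder for the floating VIP; the `sans` entry in cluster.yml is what makes the API server certificate valid for the VIP):

```yaml
# cluster.yml: include the VIP in the API server certificate SANs
authentication:
  strategy: x509
  sans:
    - "1.2.3.100"
```

```yaml
# kube_config_cluster.yml: point kubectl at the VIP instead of a single node
clusters:
- cluster:
    certificate-authority-data: <redacted>
    server: "https://1.2.3.100:6443"
  name: local
```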
Are there any guidelines on the recommended way to handle this?
Thanks in advance!