Instructions for multi master #311
Comments
For master k8s components you should only need to give another node the master label.
Then you can either create a LoadBalancer with an external IP for the kubernetes service (default namespace), or point external DNS to one or all of the API server nodes for kubectl clients to use. For self-hosted etcd you can try the steps in the etcd-operator README. I haven't tried that yet myself.
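For illustration, a rough sketch of the labeling step (node name and label key here are hypothetical; the label has to match whatever nodeSelector your rendered control-plane manifests actually use):

```sh
# Check which label the self-hosted kube-apiserver daemonset schedules on
# (daemonset name/namespace as in the default bootkube manifests).
kubectl -n kube-system get daemonset kube-apiserver \
  -o jsonpath='{.spec.template.spec.nodeSelector}'

# Label the additional node so the master components can schedule onto it
# (node name and label key are examples).
kubectl label node node-2.example.com master=true
```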
As @bzub points out, simply labeling the node as a master will start master components on these nodes. The main change is that you need some way of addressing your multiple api-servers from a single address. kubeconfig only supports a single api-server address, and even though you can specify multiple on the kubelet command line, only the first is really used. So a loadbalancer which fronts all api-servers (master nodes), or a DNS entry which maps to those nodes, are usually good options. You would then set your api server address in the kubeconfig to point to the DNS name or loadbalancer. There is also somewhat of a limitation in the internal kubernetes service, where the multiple api-servers will all overwrite each other as the only endpoint (to see this, check the endpoints of the kubernetes service). This isn't the worst thing in the world, but it's not ideal (there is work to resolve this upstream). In the interim, an option is to also use your loadbalancer/dns entry for this endpoint as well, which can be done by setting the apiserver advertise address to that loadbalancer/DNS address.
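As a minimal sketch of that first point (api.example.com is a hypothetical DNS name or load balancer VIP fronting all master nodes):

```sh
# Point the kubeconfig cluster entry at the single front-end address
# instead of at any one master node.
kubectl config set-cluster my-cluster \
  --server=https://api.example.com:443 \
  --certificate-authority=ca.crt \
  --embed-certs=true
```

Note that the apiserver serving certificate needs to include that DNS name (or the VIP) in its SANs, otherwise kubectl will reject the connection.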
I'm just starting to implement an automated HA failover system for kube-apiserver with keepalived-vip, and @aaronlevy your comment about the default kubernetes service was very enlightening. I really would have overlooked that issue, as limited as it is. Looking into it further, I found that the correct behavior for the kubernetes api service is enabled by editing the kube-apiserver DaemonSet and passing the `--apiserver-count` flag. It's unfortunate that this isn't mentioned in the primary Kubernetes High-Availability document. Also, please be warned, if you try keepalived-vip's README example, that the examples/echoheaders.yaml manifest has an improper field in it.
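If it helps, a sketch of that change (the flag value and container index are assumptions; check your own kube-apiserver DaemonSet before patching):

```sh
# Append --apiserver-count (set to the number of masters) to the first
# container's command in the self-hosted kube-apiserver daemonset.
kubectl -n kube-system patch daemonset kube-apiserver --type=json \
  -p '[{"op": "add", "path": "/spec/template/spec/containers/0/command/-", "value": "--apiserver-count=3"}]'
```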
@bzub be careful about using the `--apiserver-count` flag. Essentially you're putting a fixed number of endpoints into the kubernetes service, and if those endpoints happen to be down, a certain percentage of requests just fail (because the endpoints are not cleaned up).
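You can see this by inspecting the endpoints object behind the default kubernetes service; with `--apiserver-count` set, a downed master's address can linger there:

```sh
# The addresses listed here are what in-cluster clients of the
# "kubernetes" service get load-balanced across.
kubectl -n default get endpoints kubernetes -o yaml
```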
I'm currently using an nginx proxy pod for this. Maybe we could add that pod to bootkube? We're just missing a dynamic config writer which includes all the master nodes in the nginx upstream configuration.
Proof of concept: #684
Correct me if I'm wrong -- this results in a single nginx pod, yes? Won't that potentially move around the cluster, and thus the HA endpoint IP will change? (Actually, if it's just a Pod with no Deployment in front of it, won't it just go down if the current node fails? Pods explicitly do not survive node failures.) I like something like keepalived-vip better, or even a full cluster service like Pacemaker managing things, so that the IP never changes but follows the LB provider around. Alternately, maybe the Pod could be a single-replica Deployment with an init container to handle registering the current IP with DNS.
Are you referring to #684? #684 runs an nginx-proxy pod on every node and listens on localhost; you would then connect to the API server through localhost.
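With that approach the client-side config is just localhost (the port here is an assumption; it has to match whatever the proxy pod listens on, and the apiserver certificate needs 127.0.0.1/localhost in its SANs):

```sh
# Each node talks to its local proxy, which forwards to whichever
# apiserver is currently reachable.
kubectl config set-cluster my-cluster --server=https://127.0.0.1:7443
```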
I realise this is an ancient bug, but just in case anyone is still reading it:
No need for nginx or keepalived; failover is automatic (with at worst a TCP connect retry for external clients), except for updating the external round-robin DNS record. Just make updating that DNS entry part of your master node replacement process (which is already somewhat special-cased).
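A rough sketch of that setup (api.example.com is a hypothetical round-robin record carrying one A record per master):

```sh
# Each master node gets an A record under the same name.
dig +short api.example.com

# External clients only ever use the name; replacing a master means
# updating its A record as part of the node-replacement steps.
kubectl config set-cluster my-cluster --server=https://api.example.com:6443
```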
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
Stale issues rot after 30d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
Rotten issues close after 30d of inactivity. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
@fejta-bot: Closing this issue. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Greetings,
I have been trying to ask this on IRC as well as the k8s Slack, but am resorting to a ticket here. I apologize.
I wanted to know if a multi-master setup is possible with bootkube. If so, how do I do it, especially with the experimental etcd flag set?
Just need someone to point me in the right direction.
Thanks