hack/quickstart: Add automatic self-hosted multi-master loadbalancer #684
Conversation
Can one of the admins verify this patch?
I appreciate the effort here - but I'm pretty hesitant about going this route for our quickstart examples.

A goal of self-hosting is to have as little on-host configuration as possible. Ideally we only require docker/kubelet/kubeconfig. As long as we have other requirements, then it makes updating those components difficult (e.g. we now would have to out-of-band keep the nginx static pod up to date). So we could go down the route of checkpointing the nginx pods as well -- but we start to go down a path where we should really be looking at improving this behavior upstream - like smarter client code, and/or being able to reference multiple api-servers in a kubeconfig, etc.

Right now the options are essentially DNS records in front of your apiserver, or a load-balancer of some sort (which this accomplishes in one way). However, I'd rather not be prescriptive about how this is done - instead I feel like we need to document the options / best practices. This could even be one option (run a local proxy that knows how to evaluate apiserver endpoint records).
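For context, a kubeconfig cluster entry can only name a single `server` endpoint, which is why some indirection (DNS, a load balancer, or a local proxy) is needed to spread clients across multiple apiservers. A minimal sketch (cluster/user names and the address are illustrative):

```yaml
apiVersion: v1
kind: Config
clusters:
- name: my-cluster
  cluster:
    # Only one server can be listed per cluster entry,
    # hence the need for DNS or a proxy in front of
    # multiple apiservers.
    server: https://10.0.0.1:443
contexts:
- name: default
  context:
    cluster: my-cluster
    user: admin
current-context: default
users:
- name: admin
  user: {}
```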
If MULTI_MASTER=true is specified, init-{master,node}.sh add a static nginx-proxy manifest, change the kubeconfig server to 127.0.0.1:6443, and add an nginx.conf file. For the first master, init-master.sh also injects a manifest file that adds nginx-conf-syncer and nginx-conf-checkpointer. The nginx.conf for the first node just redirects 127.0.0.1:6443 -> 127.0.0.1:443.

nginx-conf-syncer generates an nginx.conf every minute by rendering the template "nginx.conf.template" with sigil from the nginx.conf configmap, and saves the result back into the same configmap as "nginx.conf". It finds the master nodes by looking for nodes with the label node-role.kubernetes.io/master.

nginx-conf-checkpointer pulls "nginx.conf" from the configmap and compares it to the nginx.conf on disk; if they differ, it updates the file on disk and tells nginx to reload its config.

Could fix: #311
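The compare-and-reload step described above can be sketched roughly as follows. This is not the PR's actual code; the function name, file paths, and the reload command are illustrative assumptions:

```shell
#!/bin/sh
# Sketch of a checkpointer's compare-and-reload loop body:
# $1 is the config freshly pulled from the configmap,
# $2 is the config nginx is currently serving from.
sync_conf() {
  rendered="$1"
  active="$2"
  # cmp -s is silent and exits non-zero if the files differ
  # (or if the active copy does not exist yet).
  if ! cmp -s "$rendered" "$active" 2>/dev/null; then
    cp "$rendered" "$active"
    # In the real pod this would signal nginx, e.g.:
    # nginx -s reload
    echo "reloaded"
  else
    echo "unchanged"
  fi
}
```

Running this every minute (as the PR description says) keeps the on-disk config converged to the configmap while only reloading nginx when something actually changed.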
Ideally, sadly we aren't there yet :/
You're correct. Just for the record, kubespray uses this nginx-proxy solution: https://github.com/kubernetes-incubator/kubespray/blob/master/docs/ha-mode.md
Hopefully 1.9 will include the needed logic/fix: kubernetes/kubeadm#261 (comment)
I thought of letting the
Hmm.. Another option is using the default. I have changed the code to use that. I also dropped the other part.

What do you think? Should I close this PR? I can reduce the change to around ~100 lines..
It's not so much about lines of code added in this case. More that this isn't the kind of decision that I'd really want bootkube to be prescriptive about.

For example, if I personally were running an HA cluster - there is almost no situation in which I don't have access to either DNS and/or loadbalancers. However, there are situations where it would be nice to just have "a bunch of linux boxes" and not rely on cloud-provider or external services. But I feel like this is a problem to solve upstream. In the kubeadm HA doc - it seemed like a lot of options are still under discussion: https://docs.google.com/document/d/1ff70as-CXWeRov8MCUO7UwT-MwIQ_2A0gNNDKpF39U4

Again, I really appreciate the effort here - but I just really don't think this is something for bootkube to dictate - and reducing the friction should be something focused on upstream. So I think in the current form, we wouldn't merge this. But if you were interested in converting this into documentation as a start to the "HA" docs - this is definitely a viable approach that people might want to use.

(Kind of a side note): another goal is that there is functionally no difference between a single-master cluster and a multi-master cluster (e.g. with self-hosting a single master becomes a multi-master simply by adding a label and taint). If we introduce fundamental differences - then you're somewhat stuck with whatever you chose at install time (and there isn't necessarily a reason for that).
Gotcha..
Writing is way harder than scripting/coding, at least for me. Maybe I'll give it a try.