
hack/quickstart: Add automatic self-hosted multi-master loadbalancer #684

Closed
wants to merge 2 commits into from

Conversation

klausenbusk
Contributor

If MULTI_MASTER=true is specified, init-{master,node}.sh adds a static
nginx-proxy manifest, changes the kubeconfig server to 127.0.0.1:6443 and
adds an nginx.conf file.
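
Roughly, that boils down to something like this in the init scripts (the kubeconfig path, manifest name and nginx image below are illustrative, not the exact contents of the PR):

```sh
# Illustrative sketch only: point the kubelet at a local proxy and let the
# kubelet itself run that proxy as a static pod.

# Re-point the kubeconfig at the local nginx proxy instead of a single master.
sed -i 's|server: https://.*|server: https://127.0.0.1:6443|' /etc/kubernetes/kubeconfig

# Drop a static nginx-proxy pod manifest into the kubelet's manifest directory.
cat > /etc/kubernetes/manifests/nginx-proxy.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: nginx-proxy
  namespace: kube-system
spec:
  hostNetwork: true
  containers:
  - name: nginx-proxy
    image: nginx:alpine
    volumeMounts:
    - name: etc-nginx
      mountPath: /etc/nginx
      readOnly: true
  volumes:
  - name: etc-nginx
    hostPath:
      path: /etc/nginx
EOF
```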

For the first master (init-master.sh), it also injects a manifest file
which adds nginx-conf-syncer and nginx-conf-checkpointer.
The nginx.conf for the first node just redirects 127.0.0.1:6443 -> 127.0.0.1:443.
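
For that first node the proxy is trivial, since there is nothing else to balance to yet; something along these lines (file locations assumed):

```sh
# Illustrative nginx.conf for the first master node: just forward the local
# proxy port to the local apiserver, since no other masters exist yet.
cat > /etc/nginx/nginx.conf <<'EOF'
events {}
stream {
  server {
    listen 127.0.0.1:6443;
    proxy_pass 127.0.0.1:443;
  }
}
EOF
```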

nginx-conf-syncer generates an nginx.conf every minute by rendering the
"nginx.conf.template" template from the configmap nginx.conf with sigil.
The rendered result is saved in the same configmap under the "nginx.conf" key.
It finds the master nodes by looking for nodes with the label
node-role.kubernetes.io/master.
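
As a sketch of that loop (the kubectl/sigil invocations and configmap layout here are illustrative, not the exact job from the PR):

```sh
# Illustrative nginx-conf-syncer loop: render nginx.conf from the template in
# the "nginx.conf" configmap and store the result back in the same configmap.
while true; do
  # Internal IPs of all nodes carrying the master label.
  MASTERS=$(kubectl get nodes -l node-role.kubernetes.io/master \
    -o jsonpath='{.items[*].status.addresses[?(@.type=="InternalIP")].address}')

  # Fetch the template, render it with sigil, and write both keys back.
  kubectl get configmap nginx.conf -n kube-system \
    -o jsonpath='{.data.nginx\.conf\.template}' > /tmp/nginx.conf.template
  sigil -f /tmp/nginx.conf.template masters="$MASTERS" > /tmp/nginx.conf
  kubectl create configmap nginx.conf -n kube-system \
    --from-file=nginx.conf.template=/tmp/nginx.conf.template \
    --from-file=nginx.conf=/tmp/nginx.conf \
    --dry-run -o yaml | kubectl apply -f -

  sleep 60
done
```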

nginx-conf-syncer pulls the "nginx.conf" key and compares it to the
nginx.conf on disk; if they differ, it updates the file on
disk and tells nginx to reload the config.
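
On the node side that amounts to something like this (again just a sketch, assuming the config lives at /etc/nginx/nginx.conf):

```sh
# Illustrative pull-compare-reload step: only swap the config in and reload
# nginx when the rendered config actually changed.
kubectl get configmap nginx.conf -n kube-system \
  -o jsonpath='{.data.nginx\.conf}' > /tmp/nginx.conf.new

if ! cmp -s /tmp/nginx.conf.new /etc/nginx/nginx.conf; then
  # Validate before replacing, so a bad render can't take the proxy down.
  nginx -t -c /tmp/nginx.conf.new \
    && cp /tmp/nginx.conf.new /etc/nginx/nginx.conf \
    && nginx -s reload
fi
```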

Could fix: #311

@coreosbot

Can one of the admins verify this patch?

@k8s-ci-robot added the cncf-cla: yes label on Aug 13, 2017
@aaronlevy
Contributor

I appreciate the effort here - but I'm pretty hesitant about going this route for our quickstart examples.

A goal of self-hosting is to have as little on-host configuration as possible. Ideally we only require docker/kubelet/kubeconfig. As long as we have other requirements, updating those components becomes difficult (e.g. we would now have to keep the nginx static pod up to date out-of-band).

So we could go down the route of checkpointing the nginx pods as well -- but we start to go down a path where we should really be looking at improving this behavior upstream - like smarter client code, and/or being able to reference multiple api-servers in a kubeconfig, etc.

Right now the options are essentially DNS records in front of your apiserver. Or a load-balancer of some sort (which this accomplishes in one way). However, I'd rather not be prescriptive about how this was done - and instead I feel like we need to document the options / best practices. This could even be one option (run a local proxy that knows how to evaluate apiserver endpoint records).

@k8s-ci-robot added the size/L label on Aug 28, 2017
@klausenbusk
Contributor Author

A goal of self-hosting is to have as little on-host configuration as possible. Ideally we only require docker/kubelet/kubeconfig.

Ideally, yes, but sadly we aren't there yet :/

As long as we have other requirements, then it makes updating those components difficult (e.g. we now would have to out-of-band keep the nginx static pod up to date).

You're correct. Just for the record, kubespray uses this nginx-proxy solution: https://github.com/kubernetes-incubator/kubespray/blob/master/docs/ha-mode.md

So we could go down the route of checkpointing the nginx pods as well -- but we start to go down a path where we should really be looking at improving this behavior upstream - like smarter client code, and/or being able to reference multiple api-servers in a kubeconfig, etc.

Hopefully 1.9 will include the needed logic/fix: kubernetes/kubeadm#261 (comment)

Right now the options are essentially DNS records in front of your apiserver. Or a load-balancer of some sort (which this accomplishes in one way).

I thought of letting the nginx-ingress controller handle the API HA/load balancing, but that would end up in a chicken-and-egg situation if the cluster goes down.

However, I'd rather not be prescriptive about how this was done - and instead I feel like we need to document the options / best practices. This could even be one option (run a local proxy that knows how to evaluate apiserver endpoint records).

Hmm..

Another option is using the default kubernetes service at 10.3.0.1, but again we end up in a chicken-and-egg situation: the kubelet can't connect to the API server because the overlay network isn't running, and it doesn't know it has to start the overlay network because it can't connect to the API server.

I have changed the code to use 10.3.0.1 as the "primary" server, and the external master IPs as backup servers. The external master IPs are only used until the overlay network is up and running.
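
In nginx terms that is roughly an upstream with the service VIP as the primary and the external master IPs marked as backup (the addresses below are placeholders, not values from the PR):

```sh
# Illustrative nginx.conf: prefer the in-cluster kubernetes service VIP and
# fall back to the masters' external IPs until the overlay network is up.
cat > /etc/nginx/nginx.conf <<'EOF'
events {}
stream {
  upstream apiserver {
    server 10.3.0.1:443;             # default kubernetes service ("primary")
    server 203.0.113.10:443 backup;  # external IP of master 1 (placeholder)
    server 203.0.113.11:443 backup;  # external IP of master 2 (placeholder)
  }
  server {
    listen 127.0.0.1:6443;
    proxy_pass apiserver;
  }
}
EOF
```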

I also dropped the configmap mesh; the static nginx pod manages everything now. This also reduces the number of new lines from 201 to 125, and I could reduce that even further to around ~100 lines by dropping the logic which keeps the nginx config up-to-date (that part of the change is a bit ugly).

What do you think? Should I close this PR?

@aaronlevy
Contributor

It's not so much about lines of code added in this case. More that this isn't the kind of decision that I'd really want bootkube to be prescriptive about.

For example, if I personally were running an HA cluster - there is almost no situation in which I don't have access to either DNS and/or loadbalancers.

However, there are situations where it would be nice to just have "a bunch of linux boxes" and not rely on cloud-provider or external services. But I feel like this is a problem to solve upstream. In the kubeadm HA doc, it seemed like a lot of options are still under discussion: https://docs.google.com/document/d/1ff70as-CXWeRov8MCUO7UwT-MwIQ_2A0gNNDKpF39U4

Again, I really appreciate the effort here - but I just really don't think this is something for bootkube to dictate - and reducing the friction should be something focused on upstream. So I think in the current form, we wouldn't merge this. But if you were interested in converting this into documentation as a start to the "HA" docs - this is definitely a viable approach that people might want to use.

(Kind of a side note): another goal is that there is functionally no difference between a single-master cluster, and a multi-master cluster (e.g. with self-hosting a single master becomes a multi-master simply by adding a label and taint). If we introduce fundamental differences - then you're somewhat stuck with whatever you chose at install time (and there isn't necessarily a reason for that).

@aaronlevy added the kind/feature and reviewed/won't fix labels on Aug 29, 2017
@klausenbusk
Contributor Author

It's not so much about lines of code added in this case. More that this isn't the kind of decision that I'd really want bootkube to be prescriptive about.

Gotcha..

But if you were interested in converting this into documentation as a start to the "HA" docs - this is definitely a viable approach that people might want to use.

Writing is way harder than scripting/coding, at least for me. Maybe I'll give it a try.

Labels
cncf-cla: yes · kind/feature · reviewed/won't fix · size/L
Development

Successfully merging this pull request may close these issues.

Instructions for multi master