
Allow more options for load balancing controlplane nodes externally and internally #1682

Closed
superseb opened this issue Oct 4, 2019 · 23 comments

@superseb
Contributor

superseb commented Oct 4, 2019

Previous issues/discussion in #705 and #1348

Options to implement:

  • Load balancing to controlplane nodes externally
  • Load balancing to controlplane nodes internally (from cluster nodes to kube-apiserver on controlplane nodes); this will disable nginx-proxy and use the specified address to connect to the load-balanced controlplane nodes.
  • Change kube-apiserver listen port

Affected logic/config:

  • Additional names in certificates so requests using those names are accepted (authentication.sans); see the sketch after the proposed config below
  • Generated kubeconfig for the cluster so it uses the load-balanced address and not a single controlplane node
  • Generated kubecfg files on nodes pointing to controlplane nodes/nginx-proxy
  • Connectivity check to kube-api so it uses the load-balanced address and not a single controlplane node

loadbalancer:
  # External name to be used in generated kubeconfig and in certificates for kube-apiserver
  kubeapi_external_fqdn: kubeapi.yourdomain.com
  # Optional, listen port for kubeapi_external_fqdn which is configured at the external load balancer
  kubeapi_external_port: 8443
  # Optional, disables nginx-proxy on the nodes and uses this in the node's kubecfg
  kubeapi_internal_fqdn: kubeapi-internal.yourdomain.com
  # Optional, uses this port to connect to kubeapi
  kubeapi_internal_port: 9443
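
For reference, the certificate side of this can already be approximated today with authentication.sans in cluster.yml — a minimal sketch, assuming kubeapi.yourdomain.com is the name configured at the external load balancer:

authentication:
  strategy: x509
  # Extra hostnames/IPs added to the kube-apiserver certificate so requests
  # arriving via the load-balanced name are accepted; the generated kubeconfig
  # still has to be pointed at that name by hand.
  sans:
    - "kubeapi.yourdomain.com"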

While we are at it, we might as well make the kube-apiserver port configurable so everything can be adjusted network-wise. This is currently hardcoded to 6443. For this we can either use --secure-port or add another key under kube-api (listen_port). The con of the first is tying it to a specific parameter; the con of the other is adding another key. We probably need the new key, which we can then also use for the kubecfg/nginx-proxy when we don't use any load balancing.

services:
  kube-api:
    listen_port: 7443
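
For comparison, the --secure-port route would presumably be expressed through the existing extra_args mechanism — a sketch, assuming RKE simply forwards the flag and does not propagate the new port to nginx-proxy or the generated kubeconfig/kubecfg files:

services:
  kube-api:
    extra_args:
      # --secure-port is the upstream kube-apiserver flag; everything that still
      # assumes 6443 would have to be adjusted separately, hence the proposed listen_port key
      secure-port: "7443"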

Once this design is accepted, I'll write up the steps needed to make this work so it can be picked up either externally or internally.

@mchucklee

Looking forward to this feature, thanks a lot.

@markrevill

Hi guys,

I am currently building out an RKE cluster (to host Rancher) on Azure VMs behind an external load balancer and running the client via a bastion host.

Would love to know if/when this feature request is likely to be merged, as it would add a lot of flexibility to the generated kube_config_cluster.yml.

Cheers,
Mark.

@fredleger

Any chance of seeing this implemented soon? It looks like a must-have when building an RKE cluster in private networks.

@nightmareze1

+1

@stale

stale bot commented Oct 8, 2020

This issue/PR has been automatically marked as stale because it has not had activity (commit/comment/label) for 60 days. It will be closed in 14 days if no further activity occurs. Thank you for your contributions.

stale bot added the status/stale label Oct 8, 2020
@immanuelfodor

Unstale

@aledegano

Thank you for the design!
This is exactly what we need; I hope this discussion can move forward.

@carloscarnero

This feature would be really nice, especially for on-premises folks. I have managed to get load balancing working by leveraging Nginx in TCP mode and the SANs feature; however, every time the kubeconfig file changes (via an RKE operation) I have to go and modify the API server address.
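
To make the pain point concrete: after each rke up, the regenerated kube_config_cluster.yml points at a single controlplane node, so the server field has to be re-pointed at the load balancer by hand — a sketch of the relevant kubeconfig fragment, with kubeapi.yourdomain.com and the cluster name standing in as placeholders:

apiVersion: v1
kind: Config
clusters:
- name: "my-cluster"
  cluster:
    # RKE writes the address of one controlplane node here; for the Nginx/SANs
    # workaround it has to be edited to the load-balanced name after every rke up
    server: "https://kubeapi.yourdomain.com:6443"
    certificate-authority-data: "<redacted>"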

@immanuelfodor

Same here. I'm using Kube Karp for this purpose (https://github.com/immanuelfodor/kube-karp), and since I'm version-tracking the RKE and kubeconfig files, I can git add -p the changes after rke up, skip the hunk with the modified API server URL, then git checkout -- the kubeconfig to restore the original version. It would be great to get rid of this longer workaround and just git add everything :)

@josesolisrosales

Are there any updates on this feature?

@neuromantik33

Native kube-vip support would be great! Kube Karp looks nice but still requires some of us to install MetalLB for load balancer support. Why not both? ;)

@victort

victort commented Jul 23, 2021

bump

@zbialik

zbialik commented Aug 12, 2021

Question: isn't this needed for making a Rancher cluster (deployed on an RKE cluster) highly available?

The docs don't mention anything about updating kube-api args for setting up Rancher in HA (on an RKE cluster).

@aslafy-z
Contributor

Any news on this issue? I guess this is the reason why Rancher cordons, drains, and updates all the worker nodes when adding or removing a leader node.

@Sissi44

Sissi44 commented Dec 8, 2021

Same issue for us too. Would it be possible to work on improving the node/nginx-proxy setup? We also need this feature.

@ghost

ghost commented Jun 9, 2022

Is there any update on that?

@aslafy-z
Contributor

@superseb can we hope for this to be implemented some day? I worked on a basic implementation in #2853 but am currently stuck due to the lack of contributor documentation on how to add features and build a working Rancher version that includes them. Is this issue still something you're ready to work on? If Rancher management has decided to stop working on RKE1, can you close this issue and make a clear announcement? Thank you.

@superseb
Contributor Author

superseb commented Mar 7, 2023

Sorry, I tried to work on this but I am no longer on the team that owns RKE.

@aslafy-z
Contributor

aslafy-z commented Mar 7, 2023

@superseb Could you forward this long-running issue to the right team?

@github-actions
Contributor

github-actions bot commented May 7, 2023

This repository uses an automated workflow to automatically label issues which have not had any activity (commit/comment/label) for 60 days. This helps us manage the community issues better. If the issue is still relevant, please add a comment to the issue so the workflow can remove the label and we know it is still valid. If it is no longer relevant (or possibly fixed in the latest release), the workflow will automatically close the issue in 14 days. Thank you for your contributions.

@fredleger

> …not had any activity (commit/comment/label) for 60 days. This helps us manage the community issues better. If the issue is still relevant, please add a comment to the issue so the workflow can remove the label and we know it is still valid. If it is no longer relevant (or possibly fixed in the latest release), the workflow will automatically close the issue in 14 days. Thank you for your contributions.

Activity!

@github-actions
Contributor

This repository uses an automated workflow to automatically label issues which have not had any activity (commit/comment/label) for 60 days. This helps us manage the community issues better. If the issue is still relevant, please add a comment to the issue so the workflow can remove the label and we know it is still valid. If it is no longer relevant (or possibly fixed in the latest release), the workflow will automatically close the issue in 14 days. Thank you for your contributions.

@aslafy-z
Contributor

Please reopen this issue! Thank you
