Allow more options for load balancing controlplane nodes externally and internally #1682
Comments
Looking forward to this feature, thanks a lot!
Hi guys, I am currently building out an RKE cluster (to host Rancher) on Azure VMs behind an external load balancer, running the client via a bastion host. Would love to know if/when this feature request is likely to be merged, as it would add a lot of flexibility to the outputted kubeconfig. Cheers!
Any chance to see this implemented soon? Looks like it's a must-have when building an RKE cluster in private networks.
+1
This issue/PR has been automatically marked as stale because it has not had activity (commit/comment/label) for 60 days. It will be closed in 14 days if no further activity occurs. Thank you for your contributions.
Unstale
Thank you for the design!
This feature would be really nice, especially for on-premises folks. I have managed to get load balancing working by leveraging nginx in TCP mode and the SANs feature; however, every time the kube config file changes (via an rke operation) I have to go and modify the API server address.
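For reference, a minimal sketch of the TCP-mode setup described above; the upstream addresses are placeholders, not values from this thread:

```nginx
# nginx stream (TCP) proxy in front of the controlplane nodes.
# 10.0.0.11-13 are hypothetical controlplane IPs; adjust to your cluster.
stream {
    upstream kube_apiserver {
        server 10.0.0.11:6443;
        server 10.0.0.12:6443;
        server 10.0.0.13:6443;
    }
    server {
        listen 6443;
        proxy_pass kube_apiserver;
    }
}
```

The manual edit mentioned in the comment can be scripted after each `rke up`, e.g. `kubectl --kubeconfig kube_config_cluster.yml config set-cluster local --server=https://<lb-address>:6443` (assuming RKE's default cluster name of `local`).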
Same here. I'm using Kube Karp for this purpose (https://github.com/immanuelfodor/kube-karp) and as I'm version tracking the RKE and kube config files, I can …
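Whichever VIP/LB approach is used, the shared address has to be included in the API server certificate's SANs for the generated kubeconfig to validate. A minimal sketch of the relevant `cluster.yml` stanza, with placeholder addresses:

```yaml
# cluster.yml -- add the load balancer / VIP address to the cert SANs.
# 10.0.0.100 and lb.example.com are placeholders, not values from this thread.
authentication:
  strategy: x509
  sans:
    - "10.0.0.100"
    - "lb.example.com"
```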
Are there any updates on this feature?
Native kube-vip support would be great! Kube Karp looks nice but still requires some of us to install MetalLB for load balancer support. Why not both? ;)
bump
Question: isn't this needed for making a Rancher cluster (deployed on an RKE cluster) highly available? The docs don't mention anything about updating kube-api args when setting up Rancher in HA (on an RKE cluster).
Any news on this issue? I guess this is the reason why Rancher cordons, drains, and updates all the worker nodes when adding or removing a leader node.
Same issue for us too. Would it be possible to work on improving the per-node nginx-proxy? We also need this feature.
Is there any update on that?
@superseb, can we hope for this to be implemented some day? I worked on a basic implementation in #2853 but am currently stuck by the lack of contributor documentation on how to add features and build a working Rancher version that includes them. Is this issue still something you're ready to work on? If Rancher management has decided to stop working on RKE1, can you close this issue and make a clear announcement? Thank you.
Sorry, I tried to work on this but I am no longer on the team that owns RKE.
@superseb Could you forward this long-running issue to the right team?
This repository uses an automated workflow to automatically label issues which have not had any activity (commit/comment/label) for 60 days. This helps us manage the community issues better. If the issue is still relevant, please add a comment to the issue so the workflow can remove the label and we know it is still valid. If it is no longer relevant (or possibly fixed in the latest release), the workflow will automatically close the issue in 14 days. Thank you for your contributions.
Activity!
Please reopen this issue! Thank you
Previous issues/discussion in #705 and #1348
Options to implement:

- Bypass `nginx-proxy` and use the address specified to connect to load balanced controlplane nodes.

Affected logic/config:
While we are at it, we might as well make the kube-apiserver port configurable so everything can be adjusted network-wise; it is currently hardcoded to 6443. For this we can either use `--secure-port` or add another key under `kube-api` (`listen_port`). The con of the first is tying it to a specific parameter; the con of the second is adding another key. We probably need the new key, which we can then also use for the kubecfg/nginx-proxy when we don't use any load balancing.

When this design is accepted, I'll put in the steps needed to make this work so it can be worked on externally or internally.
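To make the two options concrete, a hypothetical `cluster.yml` sketch; `extra_args` is the existing passthrough, while the `listen_port` key is the proposed addition and does not exist in RKE today:

```yaml
services:
  kube-api:
    # Option 1: reuse the existing extra_args passthrough (ties the
    # behaviour to one specific kube-apiserver flag).
    extra_args:
      secure-port: "8443"

    # Option 2 (proposed, hypothetical -- not implemented): a dedicated key
    # that RKE could also consult when rendering the kubecfg and nginx-proxy.
    # listen_port: 8443
```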