Big refactoring #348
Conversation
Co-authored-by: darthmaim <github@darthmaim.de>
…nto more-refactoring
You had your Hetzner token visible. Even though you've edited it out now, it's still visible in the history, so I would reset it now.
Without any context whatsoever it's really hard to guess what this comment is all about. Care to elaborate?
This worked for me...
don't forget to add IPv4 and IPv6 source ranges under the rules, along with port "any", for your newly created k3s firewall
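The rules the comment above describes might look roughly like this, a sketch in the shape of Hetzner Cloud firewall rules (field names follow the Cloud API; the full TCP port range stands in for "any", and the exact structure hetzner-k3s generates is an assumption here):

```yaml
# Hypothetical firewall rule set mirroring the comment above.
rules:
  - direction: in
    protocol: tcp
    port: "1-65535"   # stands in for "any port"
    source_ips:
      - 0.0.0.0/0     # IPv4
      - ::/0          # IPv6
```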
This is a PR about the upcoming v2 of the tool. Please see the discussion at #385 for details on some changes you need to make to the configuration to make it compatible with the new version. Also, please open an issue (rather than commenting in this PR) if you run into a problem with the new version. Thanks!
The docs say: # api_server_hostname: k8s.example.com # optional: DNS for the k8s API LoadBalancer. After the script has run, create a DNS record with the address of the API LoadBalancer.

And indeed, the user can't create that record BEFORE running the script, since the IP of the API load balancer isn't known yet. But because we write that hostname into the kubeconfig in save_kubeconfig(master_count) and then immediately run kubectl cluster-info, this can't work: at that point DNS is not yet configured to point to the API load balancer.

My suggested fix is to write the IP of the load balancer into the kubeconfig instead. Then everything works, and the user can, at their own pace, point their DNS at the API load balancer for that hostname, and only then adapt the kubeconfig if they want. SSL will work, since we configure tls-sans for that hostname anyway. The kubeconfig, however, is something we can't adapt for the user, since we have no control over if and when they configure their DNS.
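The suggested fix could look like this, a minimal kubeconfig sketch (cluster name, IP, and the CA placeholder are all hypothetical) with the server field set to the load balancer's IP rather than the not-yet-resolvable hostname:

```yaml
# Hypothetical kubeconfig fragment illustrating the suggested fix.
apiVersion: v1
kind: Config
clusters:
  - name: my-cluster
    cluster:
      # Works immediately after provisioning; per the comment above,
      # TLS still validates because tls-sans are configured. Once DNS
      # is set up, the user can swap in https://k8s.example.com:6443.
      server: https://203.0.113.10:6443
      certificate-authority-data: <base64-encoded-ca>
```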
Since Ubuntu 22.10, SSH uses socket activation. This means that the SSH server isn't running by default, and gets started automatically the first time there is a connection on port 22. Upside: it saves 3 MB of RAM. Downside: if you're customizing the SSH port number with a drop-in configuration file at cloud-init time, it breaks SSH. SSH then needs to be restarted (or the machine needs to be rebooted) for the port number to be picked up by the systemd generator.

The net result for hetzner-k3s is that if we use a non-default SSH port, provisioning breaks. This manifests itself by the following log message:

systemd[1]: ssh.socket: Socket unit configuration has changed while unit has been running, no open socket file descriptor left. The socket unit is not functional until restarted.

One possible fix is to disable socket activation and revert to the default mode (start SSH at boot). This can be done by disabling ssh.socket and enabling ssh.service. This patch does exactly that, adding the corresponding commands to the cloud-init template. This has been tested with Ubuntu 24.04 and 22.04 as well. The following link has more details on the Ubuntu change: https://discourse.ubuntu.com/t/sshd-now-uses-socket-based-activation-ubuntu-22-10-and-later/30189/14

This patch also adds a test harness to automatically create a bunch of clusters with different configurations and verify that they work correctly. It was helpful to confirm that this patch worked correctly with all the distros available on Hetzner; or rather, that the only ones that didn't work (alma-8, centos-8...) weren't working in the first place anyway.
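The fix described above can be sketched as a cloud-init fragment along these lines (the exact content of the project's cloud-init template is an assumption; only the two systemctl commands are taken from the description):

```yaml
#cloud-config
# Sketch: disable SSH socket activation so a custom Port from a
# drop-in sshd config is actually honored, and start sshd the
# classic way at boot instead.
runcmd:
  - systemctl disable --now ssh.socket
  - systemctl enable ssh.service
  - systemctl restart ssh.service
```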
Fix SSH when using non-default port
fix: solve timeout when api server hostname is given
… socket activation
…ll is blocked for deletion
No description provided.