
Big refactoring #348

Merged
merged 187 commits into main from more-refactoring on Aug 18, 2024
Conversation

vitobotta
Owner

No description provided.

README.md: review comment (outdated, resolved)
@Sheldan

Sheldan commented Aug 6, 2024

Your Hetzner token was visible; even though you've edited it out now, it's still visible in the history. I would reset it now.

@vitobotta
Owner Author

  • working config.yaml
hetzner_token: token
cluster_name: k3s
kubeconfig_path: "~/.kube/config"
k3s_version: v1.30.2+k3s2

public_ssh_key_path: "~/.ssh/id_rsa.pub"
private_ssh_key_path: "~/.ssh/id_rsa"


ssh_allowed_networks:
  - 0.0.0.0/0
api_allowed_networks:
  - 0.0.0.0/0

networking:
  ssh:
    port: 22
    use_agent: false # set to true if your key has a passphrase
    public_key_path: "~/.ssh/id_rsa.pub"
    private_key_path: "~/.ssh/id_rsa"
  allowed_networks:
    ssh:
      - 0.0.0.0/0
    api: # this will firewall port 6443 on the nodes; it will NOT firewall the API load balancer
      - 0.0.0.0/0
  public_network:
    ipv4: true
    ipv6: true
  private_network:
    enabled: true
    subnet: 10.0.0.0/16
    existing_network_name: ""
  cni:
    enabled: true
    encryption: false
    mode: flannel

schedule_workloads_on_masters: false
masters_pool:
  instance_type: cpx21
  instance_count: 1
  location: nbg1
worker_node_pools:
- name: small-static
  instance_type: cpx21
  instance_count: 1
  location: hel1
- name: small-autoscaled
  instance_type: cpx21
  instance_count: 1
  location: fsn1
  autoscaling:
    enabled: true
    min_instances: 0
    max_instances: 1

embedded_registry_mirror:
  enabled: true

datastore:
  mode: etcd # etcd (default) or external
  external_datastore_endpoint: postgresql://username:password@host:port/postgres
  • add IPv4 and IPv6 source addresses, with port "any", to the inbound rules of your newly created k3s firewall (see the sketch below)
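For illustration only, a hedged sketch of how such inbound rules could be added with the hcloud CLI, assuming a firewall named k3s; the firewall name and the 1-65535 port range (standing in for "any") are assumptions, not taken from this PR:

```sh
# Allow inbound TCP on all ports, from both IPv4 and IPv6 sources.
# Add an equivalent rule with --protocol udp if UDP traffic is needed too.
hcloud firewall add-rule k3s \
  --direction in \
  --protocol tcp \
  --port 1-65535 \
  --source-ips 0.0.0.0/0 \
  --source-ips ::/0
```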

Without any context whatsoever, it's really hard to guess what this comment is about. Care to elaborate?

@ajinkyajawale14499

This worked for me...

hetzner_token: api_token
cluster_name: k3s
kubeconfig_path: "~/.kube/config"
k3s_version: v1.26.4+k3s1

public_ssh_key_path: "~/.ssh/id_rsa.pub"
private_ssh_key_path: "~/.ssh/id_rsa"


ssh_allowed_networks:
  - 0.0.0.0/0
api_allowed_networks:
  - 0.0.0.0/0

additional_packages:
- haproxy
post_create_commands: ## Downloads an HAProxy configuration file for ports 80/443 (see the sketch below)
- curl -o /etc/haproxy/haproxy.cfg https://gist.githubusercontent.com/laszlocph/61642778fb61e3d7c1766d31e676c0f7/raw/bf836c73ce24b07b2423251316e6f030ada6b3c9/haproxy.cfg
- service haproxy reload

schedule_workloads_on_masters: false
masters_pool:
  instance_type: cpx21
  instance_count: 1
  location: nbg1
worker_node_pools:
- name: small-static
  instance_type: cpx21
  instance_count: 1
  location: hel1
- name: small-autoscaled
  instance_type: cpx21
  instance_count: 1
  location: fsn1
  autoscaling:
    enabled: true
    min_instances: 0
    max_instances: 1

embedded_registry_mirror:
  enabled: true

datastore:
  mode: etcd # etcd (default) or external
  external_datastore_endpoint: postgresql://username:password@host:port/postgres

Don't forget to add IPv4 and IPv6 under the rules, along with port "any", for your newly created k3s firewall.
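For context, the gist downloaded by the post_create_commands above is an HAProxy configuration. Below is a minimal sketch of that kind of config, assuming plain TCP passthrough of ports 80/443 to an ingress NodePort; the node IP and ports are placeholders, not the gist's actual contents:

```
# /etc/haproxy/haproxy.cfg (illustrative sketch only)
defaults
    mode tcp
    timeout connect 5s
    timeout client  50s
    timeout server  50s

frontend http_in
    bind *:80
    default_backend http_nodes

backend http_nodes
    server node1 10.0.0.3:30080 check   # placeholder node IP and NodePort

frontend https_in
    bind *:443
    default_backend https_nodes

backend https_nodes
    server node1 10.0.0.3:30443 check   # placeholder node IP and NodePort
```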

@vitobotta
Owner Author


This PR is about the upcoming v2 of the tool. Please see the discussion at #385 for details on the configuration changes needed to make it compatible with the new version. Also, please open an issue (rather than commenting in this PR) if you run into a problem with the new version. Thanks!

Gunther Klessinger and others added 20 commits August 7, 2024 14:14
The docs say: # api_server_hostname: k8s.example.com # optional: DNS for the k8s API LoadBalancer. After the script has run, create a DNS record with the address of the API LoadBalancer.

And indeed, the user can't do this BEFORE running the script, since the IP of the API load balancer isn't known until it has been created.

But because we write that hostname into the kubeconfig, the sequence save_kubeconfig(master_count) followed by kubectl cluster-info in the next command can't work: at that point DNS is not yet configured to point to the API load balancer.

My suggested fix is to write the IP of the load balancer into the kubeconfig instead. Then everything works, and the user can, at their own pace, point DNS for that hostname at the API load balancer, and only then adapt the kubeconfig if desired. TLS will work since we configure the tls-sans for that hostname anyway. We can't adapt the kubeconfig for the user, because we have no control over whether and when they configure their DNS.
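As a sketch of what this means for the user (the cluster name, IP, and hostname below are placeholder examples, not values from this PR):

```sh
# Right after provisioning, the kubeconfig's server field is the load
# balancer's IP (e.g. https://203.0.113.10:6443), so this works with no DNS:
kubectl cluster-info

# Later, once a DNS record for the API hostname exists, the user can
# switch the kubeconfig over to it:
kubectl config set-cluster k3s --server=https://k8s.example.com:6443
kubectl cluster-info
```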

Since Ubuntu 22.10, SSH uses socket activation. This means that the
SSH server isn't running by default, and gets started automatically
the first time there is a connection on port 22.

Upside: it saves 3 MB of RAM.

Downside: if you're customizing the SSH port number with a drop-in
configuration file at cloud-init time, it breaks SSH. SSH then needs
to be restarted (or the machine needs to be rebooted) for the port
number to be picked up by the systemd generator. The net result for
hetzner-k3s is that if we use a non-default SSH port, provisioning
breaks.

This manifests itself by the following log message:

  systemd[1]: ssh.socket: Socket unit configuration has changed while
  unit has been running, no open socket file descriptor left. The
  socket unit is not functional until restarted.

One possible fix is to disable socket activation and revert to the
default mode (start SSH at boot). This can be done by disabling
ssh.socket and enabling ssh.service.

This patch does exactly that, adding the corresponding commands to
the cloud-init template. This has been tested with Ubuntu 24.04
and 22.04 as well.
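A minimal sketch of the kind of commands this amounts to (the actual cloud-init template in the patch may differ):

```sh
# Revert from socket activation to starting sshd at boot, so that a
# custom port set via a drop-in config is picked up right away.
systemctl disable --now ssh.socket
systemctl enable ssh.service
systemctl restart ssh.service
```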

The following link has more details on the Ubuntu change:

https://discourse.ubuntu.com/t/sshd-now-uses-socket-based-activation-ubuntu-22-10-and-later/30189/14

This patch also adds a test harness to automatically create a
bunch of clusters with different configurations and verify that
they work correctly. It was helpful to confirm that this patch
worked correctly with all the distros available on Hetzner; or
rather, that the only ones that didn't work (alma-8, centos-8...)
weren't working in the first place anyway.
fix: solve timeout when api server hostname is given

sonarcloud bot commented Aug 18, 2024

vitobotta merged commit 97cb521 into main on Aug 18, 2024
5 checks passed
vitobotta deleted the more-refactoring branch on August 18, 2024 at 19:27