Simple opinionated high availability Kubernetes cluster deployed in libvirt.
Connections to the API server can also be made highly available for kube-controller-manager and the scheduler; this is toggled with the `kcm_scheduler_with_ha_apiserver_connection` variable in `vars.yaml` (WARNING: setting it to `true` can cause problems with upgrades).
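A sketch of flipping that toggle from the shell, assuming the variable already exists as a top-level `key: value` line in `vars.yaml`:

```sh
# Assumes kcm_scheduler_with_ha_apiserver_connection is a top-level key in vars.yaml.
sed -i 's/^kcm_scheduler_with_ha_apiserver_connection:.*/kcm_scheduler_with_ha_apiserver_connection: true/' vars.yaml
```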
- Install ansible requirements: `ansible-galaxy collection install -r requirements.yml`
- Copy the `hosts.example` file into the `hosts` file.
- Create the `kubeha` network with `ansible-playbook base/network-init.yml`.
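  Concretely, assuming the commands are run from the repository root:

  ```sh
  cp hosts.example hosts
  ansible-playbook base/network-init.yml
  ```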
- Prepare the base VM:
  - Install Fedora or Fedora Rawhide and name it `fedora-base`. Select the `kubeha` network as the source of the VM's NIC.
  - Input the base VM name and IP address into the `hosts` file.
  - Start the base VM and create ssh keys: `ansible-playbook base/base-vm-start.yml`
  - Enable sshd and permit root ssh access to the VM.
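    A minimal sketch of this step, run inside the VM and assuming a stock Fedora guest where no other config sets `PermitRootLogin`:

    ```sh
    systemctl enable --now sshd
    # sshd uses the first value it reads for each keyword, so an
    # early-sorting drop-in wins over later defaults.
    printf 'PermitRootLogin yes\n' > /etc/ssh/sshd_config.d/01-permitroot.conf
    systemctl restart sshd
    ```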
  - Copy the public key into the VM: `ssh-copy-id -o "StrictHostKeyChecking=no" -o "UserKnownHostsFile=/dev/null" -i auth/id_rsa root@${BASE_VM_IP}`
  - Prepare the base VM and turn it off: `ansible-playbook base/base-vm-prepare.yml`
- Clone `fedora-base` into as many masters and workers as desired via `virt-manager`.
- Start all the VMs to obtain generated IP addresses.
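  If you prefer the CLI to virt-manager, these two steps might look like this with the stock libvirt tools (the VM names and counts are only examples):

  ```sh
  virt-clone --original fedora-base --name master1 --auto-clone
  virt-clone --original fedora-base --name worker1 --auto-clone
  virsh start master1
  virsh start worker1
  ```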
- Insert the VM names and IP addresses into the `hosts` file.
- Optionally regenerate the ssh host keys in all the VMs: `rm /etc/ssh/ssh_host_* && ssh-keygen -A && systemctl restart sshd`
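  For the inventory step above, a hypothetical `hosts` fragment (the group names, hostnames and addresses are invented; mirror the real layout from `hosts.example`):

  ```sh
  # Hypothetical groups/names/IPs -- copy the actual layout from hosts.example.
  cat >> hosts <<'EOF'
  [masters]
  master1 ansible_host=192.168.150.11
  [workers]
  worker1 ansible_host=192.168.150.21
  EOF
  ```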
- Inspect `vars.yaml` for any customization.
- Install the DNS servers: `ansible-playbook install-dns.yml`
- Install the cluster: `ansible-playbook install-cluster.yml`
- Either add the `dns` group IPs from the `./hosts` file as your DNS servers, or add the following entry to your hosts file: `echo '192.168.150.2 api-kube.kubeha.knet' >> /etc/hosts`
- Use the `./lifecycle` and `./cluster` scripts for management of the cluster.
Run `ansible-playbook cluster/upgrade-to-latest.yml` to upgrade the cluster, the system and its packages to the latest version.

To upgrade or downgrade to a specific Kubernetes version instead, use the steps below. This option only upgrades or downgrades the Kubernetes packages, not the whole system, and there is no guarantee that it will work.
- Set the `k8s_version` variable in `vars.yaml` to the desired Kubernetes version.
- Run `ansible-playbook cluster/upgrade-downgrade-to-version.yml` to upgrade or downgrade the cluster.
- The `force_upgrade_downgrade` variable can be set to `true` in `vars.yaml` if you encounter errors (e.g. when downgrading).
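A sketch of the whole flow, assuming both variables already exist as top-level keys in `vars.yaml` (the version string is only an example):

```sh
sed -i 's/^k8s_version:.*/k8s_version: "1.29.4"/' vars.yaml
# Only if the run fails, e.g. on a downgrade:
sed -i 's/^force_upgrade_downgrade:.*/force_upgrade_downgrade: true/' vars.yaml
ansible-playbook cluster/upgrade-downgrade-to-version.yml
```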
Upgrade versions and validations can be checked by sshing into a master node and running `kubeadm upgrade plan`.
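For example (the node name is a placeholder; root ssh access was set up during installation):

```sh
ssh root@master1 kubeadm upgrade plan
```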