Create your own personal Kubernetes infrastructure, the quick and easy way
## Quick start

```bash
# get this repository here
git clone https://github.com/JamesClonk/k8s-infrastructure
cd k8s-infrastructure

# adjust main configuration file:
vi configuration.yml

# adjust secrets file:
sops secrets.sops

# provision Kubernetes on Hetzner Cloud with CSI driver for persistent volumes,
# and install these basic tools and software:
# ingress-nginx, cert-manager, dashboard, prometheus, loki, grafana, postgres
./deploy.sh

# configure DNS provider/entries, with loadbalancer-, floating- or server-ip:
# A     $INGRESS_DOMAIN    $IP
# CNAME *.$INGRESS_DOMAIN  $INGRESS_DOMAIN
```
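To make the last step more concrete, the two DNS entries could look like this in a zone file; the domain and IP below are placeholders for your own `$INGRESS_DOMAIN` and loadbalancer, floating or server IP:

```
; hypothetical example, assuming INGRESS_DOMAIN=k8s.example.com and a server IP of 203.0.113.10
k8s.example.com.     300  IN  A      203.0.113.10
*.k8s.example.com.   300  IN  CNAME  k8s.example.com.
```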
This is a collection of scripts for a fully automated deployment of Kubernetes onto a Hetzner Cloud virtual machine. It will use the Hetzner Cloud CLI to create a single VM, deploy K3s onto it, target the newly installed Kubernetes and deploy various additional components. The whole deployment process is entirely automated and idempotent, and can also run automatically via the included `.github/workflows`.
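The rough flow looks something like the sketch below. This is not the actual content of `deploy.sh`, just an illustration of the steps it automates; server name, type, image and SSH key are placeholder values:

```bash
# illustrative sketch only, not the real deploy.sh

# create a single VM via the Hetzner Cloud CLI (name/type/image/key are placeholders)
hcloud server create --name k8s --type cpx41 --image ubuntu-22.04 --ssh-key my-key

# install K3s on the new VM (get.k3s.io is the official K3s installer)
ssh root@<server-ip> 'curl -sfL https://get.k3s.io | sh -'

# fetch the kubeconfig and target the new cluster
# (the server address inside k3s.yaml has to be changed to the VM's public IP)
scp root@<server-ip>:/etc/rancher/k3s/k3s.yaml ./kubeconfig
export KUBECONFIG=$PWD/kubeconfig

# deploy the additional components, for example with kapp
kapp deploy -a ingress-nginx -f ingress-nginx/
```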
Installation? There's nothing to install here. Just run the steps as mentioned above in "Quick start" and off you go, your very own personal Kubernetes cluster will be deployed on Hetzner Cloud. 🥳
The provided default configuration inside `configuration.yml` is aimed at provisioning and using a CPX41 (or higher) Hetzner Cloud virtual machine type, with at least 8 CPUs and 16 GB of memory. You will have to modify `configuration.yml` and `secrets.sops` (via `sops secrets.sops`) before you can provision your own Kubernetes cluster on Hetzner Cloud.
For example, a CX31 costs ~10€ per month and is billed hourly, which makes it a very cheap and super convenient option for testing purposes.
If you want to use a lower spec machine, then you should also adjust the resource values for some of the included components, mainly to reduce their memory footprint. To do so, simply go through each subdirectory and check its respective `values.yml`; if it contains a `resources` section, you can adjust the values there (see the examples below for postgres and prometheus).
Adjust `postgres.resources.memory_in_mb` to `256` for a minimal database sizing. You can disable the periodic backups by setting `pgbackup.enabled` to `false`, as each backup job can consume up to 1 GB of memory while it is running. You can also configure the backup job's maximum memory consumption via `pgbackup.resources.memory_in_mb`, though decreasing this value too much will cause the backup to fail and crash if it runs out of memory while creating a database dump.
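Put together, a reduced database footprint along those lines might look roughly like this in the corresponding `values.yml` (the exact file structure may differ; only the dotted keys above are taken from this repository):

```yaml
# hypothetical excerpt: minimal database sizing with periodic backups disabled
postgres:
  resources:
    memory_in_mb: 256
pgbackup:
  enabled: false
  # or keep backups enabled and cap their memory instead (value is illustrative):
  # resources:
  #   memory_in_mb: 512
```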
Adjust `prometheus.resources.requests|limits` to lower values to reduce maximum memory usage. Be careful not to set the limits too low; prometheus-server can easily crash from running out of memory while executing heavier metrics queries.
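As a hypothetical illustration of such a reduction, using the keys mentioned above (the numbers are made up and not this repository's defaults):

```yaml
# hypothetical excerpt: smaller prometheus-server footprint, values are illustrative only
prometheus:
  resources:
    requests:
      cpu: 100m
      memory: 512Mi
    limits:
      cpu: 500m
      memory: 1Gi
```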
Name | Description | URL |
---|---|---|
K3s | An easy to install, lightweight, fully compliant Kubernetes distribution packaged as a single binary | https://github.com/rancher/k3s |
NGINX Ingress Controller | An Ingress controller for Kubernetes using NGINX as a reverse proxy and load balancer | https://github.com/kubernetes/ingress-nginx |
oauth2-proxy | A proxy that provides authentication with Google, Azure, OpenID Connect and many more identity providers | https://github.com/oauth2-proxy/oauth2-proxy |
cert-manager | Automatic certificate management on top of Kubernetes, using Let's Encrypt | https://github.com/jetstack/cert-manager |
Kubernetes Dashboard | General-purpose web UI for Kubernetes clusters | https://github.com/kubernetes/dashboard |
kube-state-metrics | Add-on agent to generate and expose cluster-level metrics | https://github.com/kubernetes/kube-state-metrics |
Prometheus | Monitoring & alerting system, and time series database for metrics | https://github.com/prometheus |
Loki | A horizontally-scalable, highly-available, multi-tenant log aggregation system | https://github.com/grafana/loki |
Grafana | Monitoring and metric analytics & dashboards for Prometheus and Loki | https://github.com/grafana/grafana |
PostgreSQL | The world's most advanced open source relational database | https://www.postgresql.org/docs |
Name | Description | URL |
---|---|---|
Hetzner Cloud | Command-line interface for interacting with Hetzner Cloud | https://github.com/hetznercloud/cli |
Mozilla SOPS | Encrypt files with AWS KMS, GCP KMS, Azure Key Vault, age, and PGP | https://github.com/mozilla/sops |
kapp | Deploy and view groups of Kubernetes resources as applications | Carvel (formerly https://k14s.io) |
ytt | Template and overlay Kubernetes configuration via YAML structures | Carvel (formerly https://k14s.io) |
vendir | Declaratively state what files should be in a directory | Carvel (formerly https://k14s.io) |
kbld | Seamlessly incorporates image building, pushing, and resolution into deployment workflows | Carvel (formerly https://k14s.io) |
kapp-controller | Kubernetes controller for Kapp, provides App CRDs | Carvel (formerly https://k14s.io) |
k9s | Terminal UI to interact with your Kubernetes clusters | https://github.com/derailed/k9s |
Well, this is meant to be used for a single-user Kubernetes cluster, whether with only one node or multiple nodes, self-deployed or managed. While operators are certainly cool pieces of software, they don't really make much sense for a single-user scenario, hence I saw no reason to use the prometheus, grafana and postgres operators for those parts of this Kubernetes-infrastructure-as-code project.
I was considering and experimenting with using oauth2-proxy and authelia, but ultimately made the same decision as with the operators. It simply doesn't make much sense for a single-user Kubernetes cluster; the engineering and operational overhead was not worth it. All I needed were static username+password credentials for securing my applications.
My recommendation would be to use one of these two if you have more requirements than I do:

- https://github.com/oauth2-proxy/oauth2-proxy (a simple OAuth2 proxy, to be used with GitHub for example)
- https://github.com/authelia/authelia (allows sophisticated auth configuration, 2FA, etc.)

Both can easily be configured to work well together with ingress-nginx.
The above is not true anymore, because I am now actually using oauth2-proxy together with GitHub for all ingresses 😂
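For reference, hooking oauth2-proxy into ingress-nginx is usually done through the external-auth annotations on each ingress. A minimal sketch, assuming oauth2-proxy is reachable at `oauth2-proxy.example.com` (hostnames and names are placeholders, not taken from this repository):

```yaml
# minimal sketch: protect an application ingress with oauth2-proxy via ingress-nginx external auth
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app
  annotations:
    nginx.ingress.kubernetes.io/auth-url: "https://oauth2-proxy.example.com/oauth2/auth"
    nginx.ingress.kubernetes.io/auth-signin: "https://oauth2-proxy.example.com/oauth2/start?rd=$scheme://$host$request_uri"
spec:
  ingressClassName: nginx
  rules:
  - host: my-app.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-app
            port:
              number: 8080
```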