This repository contains a Terraform module for Vultr that provides a minimal infra base for kubespray to provision a Kubernetes cluster.
Before starting, be sure to create a Vultr API key and export it to your shell session:
```sh
$ export VULTR_API_KEY=*api_key*
```
Also, make sure there is at least one SSH key created on Vultr prior to infra provisioning.
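If you need the IDs of those keys for the `ssh_key_ids` variable, they can be listed via the Vultr API; a quick sketch:

```sh
# List SSH keys registered on the account; the "id" values are what ssh_key_ids expects.
curl "https://api.vultr.com/v2/ssh-keys" \
  -X GET \
  -H "Authorization: Bearer ${VULTR_API_KEY}"
```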
If you'd like to see what's on the menu for available regions, instance types, and beyond, look no further:
curl "https://api.vultr.com/v2/regions" \
-X GET \
-H "Authorization: Bearer ${VULTR_API_KEY}"
curl "https://api.vultr.com/v2/plans" \
-X GET \
-H "Authorization: Bearer ${VULTR_API_KEY}"
curl "https://api.vultr.com/v2/os" \
-X GET \
-H "Authorization: Bearer ${VULTR_API_KEY}"
After that, the workflow is pretty simple:

- Configure `terraform.tfvars` to your heart's content; the available variables are described below (see the example sketch after this list)
- Fetch the Terraform provider plugins and initialize state: `terraform init`
- Apply the configuration: `terraform apply`
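A rough `terraform.tfvars` sketch to start from; the values are illustrative, and the `cluster_pool_sizes` key names are an assumption here, so check the module's variable definitions for the exact schema and defaults:

```hcl
# Illustrative values only; adjust to your needs.
name_prefix      = "compute"
region           = "ewr"
instance_plan    = "vc2-4c-8gb"
instance_os_name = "Debian 12 x64 (bookworm)"

# Assumed key names for the master/worker pools; verify against the variable definition.
cluster_pool_sizes = {
  masters = 3
  workers = 2
}

# IDs of SSH keys already registered on Vultr (see the ssh-keys API call above).
ssh_key_ids = ["*ssh_key_id*"]
```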
If you'd like to apply a specific part of the configuration without affecting other resources, use resource targeting:

```sh
$ terraform apply -target *e.g. vultr_instance.masters*
```
After the infra is provisioned, an Ansible inventory file will be rendered and available at `inventory.ini`.
This file can be used as is for the kubespray playbook or customized beforehand.
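Before going further, a quick connectivity check can save some head-scratching; a minimal sketch, assuming the rendered inventory connects as root like the other commands here:

```sh
$ ansible -i inventory.ini -m ping -u root all
```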
Pro tip: disable the nodes' default firewall (e.g. `ufw`) when using iptables-based firewall rules in kubespray. You can simply run:

```sh
$ ansible -i inventory.ini -m command -a "ufw disable" -u root all
```
Then, you can copy `inventory.ini` to an existing Ansible workspace, or create a new one by copying the sample inventory:

```sh
$ git clone https://github.com/kubernetes-sigs/kubespray -b v2.23.1
$ cp -R kubespray/inventory/sample *ansible_workspace_path*/inventory
$ cp inventory.ini *ansible_workspace_path*/inventory
```
Finally, after setting all the desired values in the kubespray configuration files, initiate the cluster setup:

```sh
$ ansible-playbook -i *ansible_workspace_path*/inventory/inventory.ini --user=root --become --become-user=root cluster.yml -v
```
To connect to the cluster, simply SSH to any master node, copy `/etc/kubernetes/admin.conf` to your machine, and run `kubectl get nodes`.
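A minimal sketch of that last step, assuming root SSH access to a master; depending on your kubespray settings, the `server:` address inside `admin.conf` may point at a private IP and need to be edited to the master's public IP first:

```sh
# Copy the admin kubeconfig from a master node (use an IP from the Terraform outputs below)
$ scp root@*master_public_ip*:/etc/kubernetes/admin.conf ./admin.conf

# Point kubectl at it and verify the nodes are up
$ export KUBECONFIG=$PWD/admin.conf
$ kubectl get nodes
```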
## Requirements

| Name | Version |
|------|---------|
| vultr | 2.17.1 |
## Providers

| Name | Version |
|------|---------|
| local | 2.4.0 |
| random | 3.5.1 |
| vultr | 2.17.1 |
## Resources

| Name | Type |
|------|------|
| local_file.inventory | resource |
| random_id.main | resource |
| vultr_instance.masters | resource |
| vultr_instance.workers | resource |
| vultr_vpc2.main | resource |
| vultr_os.main | data source |
## Inputs

| Name | Description | Type | Default | Required |
|------|-------------|------|---------|----------|
| cluster_pool_sizes | Configuration for master and worker pool sizes for the K8s cluster | map(number) | {…} | no |
| etcd_nodes | List of nodes to deploy etcd to; used during Ansible inventory rendering | list(string) | […] | no |
| instance_activation_email | n/a | bool | false | no |
| instance_backup_schedule | n/a | string | "daily" | no |
| instance_backup_state | n/a | string | "disabled" | no |
| instance_ddos_protection | n/a | bool | false | no |
| instance_enable_ipv6 | n/a | bool | false | no |
| instance_os_id | OS ID to use for the instances | number | 0 | no |
| instance_os_name | Name of the OS to use for cluster nodes | string | "Debian 12 x64 (bookworm)" | no |
| instance_plan | Instance type and size to use | string | "vc2-4c-8gb" | no |
| name_prefix | n/a | string | "compute" | no |
| region | n/a | string | "ewr" | no |
| ssh_key_ids | IDs of existing Vultr SSH keys | list(string) | […] | no |
| vpc_ip_block | n/a | string | "10.0.0.0" | no |
| vpc_ip_type | n/a | string | "v4" | no |
| vpc_prefix_length | n/a | number | 24 | no |
| vultr_rate_limit | n/a | number | 100 | no |
| vultr_retry_limit | n/a | number | 3 | no |
## Outputs

| Name | Description |
|------|-------------|
| master_private_ips | n/a |
| master_public_ips | n/a |
| worker_private_ips | n/a |
| worker_public_ips | n/a |
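To retrieve these values after provisioning (for example, for the kubeconfig copy above), standard `terraform output` usage works:

```sh
$ terraform output master_public_ips
$ terraform output -json worker_private_ips
```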