V8.0.0 #23

Merged
merged 10 commits, Mar 4, 2024
Changes from all commits
7 changes: 7 additions & 0 deletions .github/dependabot.yml
Original file line number Diff line number Diff line change
@@ -0,0 +1,7 @@
version: 2
updates:
  # Maintain dependencies for GitHub Actions
  - package-ecosystem: "github-actions"
    directory: "/"
    schedule:
      interval: "daily"
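
Dependabot can also track the Terraform ecosystem itself, keeping provider and module pins current. A hedged sketch of an extra update block (the directory and interval are assumptions, not part of this PR):

```yaml
version: 2
updates:
  - package-ecosystem: "github-actions"
    directory: "/"
    schedule:
      interval: "daily"
  # Hypothetical addition: also watch Terraform providers/modules
  - package-ecosystem: "terraform"
    directory: "/example"
    schedule:
      interval: "weekly"
```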
4 changes: 1 addition & 3 deletions .github/workflows/terraform.yml
@@ -10,11 +10,9 @@ jobs:
runs-on: ubuntu-latest
steps:
  - uses: actions/checkout@v3
    with:
      fetch-depth: 0
  - uses: hashicorp/setup-terraform@v2
    with:
      terraform_version: 1.2.4
      terraform_version: 1.5.5
  - run: |
      terraform fmt -check -recursive ./modules
- name: Terraform Init
7 changes: 7 additions & 0 deletions Makefile
@@ -0,0 +1,7 @@
fmt-all:
	terraform fmt -recursive modules/nodes
	terraform fmt -recursive modules/controllers
	terraform fmt -recursive example

checks: fmt-all
	tfsec .
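
The formatting and tfsec targets could be complemented by a syntax-validation target. A hedged sketch, assuming no remote backend is needed for a pure validate pass (hence `-backend=false`) and that `-chdir` is available (Terraform >= 0.15):

```makefile
# Hypothetical extra target: validate the modules without touching any state
validate-all:
	terraform -chdir=modules/controllers init -backend=false
	terraform -chdir=modules/controllers validate
	terraform -chdir=modules/nodes init -backend=false
	terraform -chdir=modules/nodes validate
```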
46 changes: 24 additions & 22 deletions README.md
@@ -1,11 +1,20 @@
# Terraform module for kubernetes on AWS

[![Actions Status](https://github.com/jecnua/terraform-aws-kubernetes/workflows/Tests/badge.svg)](https://github.com/jecnua/terraform-aws-kubernetes/actions)
![https://www.terraform.io/](https://img.shields.io/badge/terraform-v1.2.x-blue.svg?style=flat)
![https://github.com/opentffoundation/manifesto](https://img.shields.io/badge/OpenTF-1.6.0-blue.svg?style=flat)
![https://www.terraform.io/](https://img.shields.io/badge/terraform-<=v1.5.5-red.svg?style=flat)
[![License: MIT](https://img.shields.io/badge/license-MIT-yellow.svg)](https://opensource.org/licenses/MIT)
![](https://img.shields.io/maintenance/yes/2022.svg)
[![Average time to resolve an issue](http://isitmaintained.com/badge/resolution/jecnua/terraform-aws-kubernetes.svg)](http://isitmaintained.com/project/jecnua/terraform-aws-kubernetes "Average time to resolve an issue")
[![Percentage of issues still open](http://isitmaintained.com/badge/open/jecnua/terraform-aws-kubernetes.svg)](http://isitmaintained.com/project/jecnua/terraform-aws-kubernetes "Percentage of issues still open")
![](https://img.shields.io/maintenance/yes/2023.svg)

# Disclaimer - OpenTF support

- [https://github.com/opentffoundation/manifesto](https://github.com/opentffoundation/manifesto)

I support OpenTF. As soon as the first version of OpenTF is available, this repo will switch to it and
any "direct" support of Terraform will be dropped. I will tag the last commit tested on 1.5.5 for people
who want to keep using Terraform or fork from there. Realistically, the fork will not diverge immediately anyway.

# Module

This repository contains a set of modules that will allow you to install a kubernetes cluster in your own AWS environment.
No other cloud provider is supported.
@@ -27,31 +36,24 @@ More information on each module can be found at the following links:

[Module maintainers](MAINTAINERS.md)

## Supported terraform versions

*NOTE*: It only supports Terraform 1.2.x onward

For older Terraform version please use:
## Terraform

- For 0.11 the tag _v0.11.x-last-supported-code_
- For 0.12 the tag _v0.12.x-last-supported-code_
- For 0.13 the tag _v0.13.x-last-supported-code_
- For 0.14 the tag _v0.14.x-last-supported-code_
### Supported terraform versions

*DISCLAIMER*: The code on these branches is not updated.
This module will only support up to Terraform 1.5.5 (due to the license change).

## Tests
### Providers

Unfortunately for now is tested manually. I do however test it weekly :)
Last tested with:
Unfortunately, for now it is tested manually. Last tested with:

```
$ terraform version
Terraform v1.2.6
Terraform v1.5.5
on linux_amd64
+ provider registry.terraform.io/hashicorp/aws v4.23.0
+ provider registry.terraform.io/hashicorp/http v3.0.1
+ provider registry.terraform.io/hashicorp/null v3.1.1
+ provider registry.terraform.io/hashicorp/random v3.3.2
+ provider registry.terraform.io/hashicorp/aws v5.14.0
+ provider registry.terraform.io/hashicorp/external v2.3.1
+ provider registry.terraform.io/hashicorp/http v3.4.0
+ provider registry.terraform.io/hashicorp/null v3.2.1
+ provider registry.terraform.io/hashicorp/random v3.5.1
+ provider registry.terraform.io/hashicorp/template v2.2.0
```
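
Since the module now caps support at 1.5.5, callers can encode that in a `required_version` constraint so `terraform init` fails fast on a newer CLI. A minimal sketch (the provider pin mirrors the tested versions above; everything else is an assumption):

```hcl
terraform {
  # 1.5.5 is the last release tested by this module (and the last MPL-licensed Terraform)
  required_version = ">= 1.2.0, <= 1.5.5"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.14"
    }
  }
}
```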
16 changes: 12 additions & 4 deletions example/01-main.tf
@@ -44,8 +44,12 @@ module "k8s_nodes_containerd" {
ec2_k8s_workers_instance_type = "m5a.large"
vpc_id = "vpc-xxx"
private_subnets = module.k8s.nodes_subnets_private_id
nodes_cri_bootstrap = module.containerd_cri.cri_bootstrap
nodes_config_bundle = module.k8s.nodes_config_bundle # You get you configurations from the master module
private_subnets_cidr = [
"x.x.x.x/25",
"x.x.x.x/25",
]
nodes_cri_bootstrap = module.containerd_cri.cri_bootstrap
nodes_config_bundle = module.k8s.nodes_config_bundle # You get your configuration from the master module
}

module "k8s_nodes_crio" {
@@ -54,8 +58,12 @@ module "k8s_nodes_crio" {
ec2_k8s_workers_instance_type = "m5a.large"
vpc_id = "vpc-xxx"
private_subnets = module.k8s.nodes_subnets_private_id
nodes_cri_bootstrap = module.crio_cri.cri_bootstrap
nodes_config_bundle = module.k8s.nodes_config_bundle # You get you configurations from the master module
private_subnets_cidr = [
"x.x.x.x/25",
"x.x.x.x/25",
]
nodes_cri_bootstrap = module.crio_cri.cri_bootstrap
nodes_config_bundle = module.k8s.nodes_config_bundle # You get your configuration from the master module
}
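
The `x.x.x.x/25` placeholders for `private_subnets_cidr` could instead be resolved from the subnet IDs the master module already exports. A hedged sketch (it assumes `nodes_subnets_private_id` is a list of subnet IDs, which matches how it is used above):

```hcl
# Hypothetical lookup: reuse each private subnet's CIDR instead of
# hardcoding the /25 blocks in the node modules.
data "aws_subnet" "nodes_private" {
  for_each = toset(module.k8s.nodes_subnets_private_id)
  id       = each.value
}

locals {
  nodes_private_cidrs = [for s in data.aws_subnet.nodes_private : s.cidr_block]
}
```

`local.nodes_private_cidrs` could then be passed as `private_subnets_cidr` to both node modules.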

resource "aws_route" "private_subnets_route_traffic_to_NAT" {
24 changes: 20 additions & 4 deletions modules/controllers/00-variables_defaults.tf
@@ -65,12 +65,10 @@ variable "userdata_pre_install" {
}

# By default will install calico as CNI but you can override it to use what you want
# Example of weave as alternative (remember to escape the "):
# su "$KCTL_USER" -c "kubectl apply -f https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"
variable "userdata_cni_install" {
variable "cni_file_location" {
description = "HTTP(S) location of the CNI manifest that will be applied"
type = string
default = "su \"$KCTL_USER\" -c \"kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml\""
default = "https://docs.projectcalico.org/manifests/calico.yaml"
}
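
With the renamed variable, swapping the CNI is just a matter of pointing at a different manifest URL. A hedged example (the Flannel URL and module source path are assumptions, not something this module tests):

```hcl
module "k8s" {
  source = "../modules/controllers"
  # ... other required variables ...

  # Hypothetical override: install Flannel instead of the default Calico
  cni_file_location = "https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml"
}
```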

variable "userdata_post_install" {
@@ -115,6 +113,24 @@ variable "ebs_volume_type" {
default = "gp3"
}

variable "health_check_type" {
type = string
description = "The health check type"
default = "EC2"
}

variable "health_check_grace_period" {
type = string
description = "The health check grace period, in seconds"
default = "300"
}

variable "authorization_mode" {
type = string
description = "API server authorization modes: https://kubernetes.io/docs/reference/access-authn-authz/authorization/#authorization-modules"
default = "Node,RBAC"
}
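
As the CHANGELOG notes, this variable is the hook for adding `Webhook` authorization. A minimal sketch of an override (the module source path is an assumption):

```hcl
module "k8s" {
  source = "../modules/controllers"
  # ... other required variables ...

  # Append Webhook to the default modes, e.g. for an external authorizer
  authorization_mode = "Node,RBAC,Webhook"
}
```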

//variable "market_options" {
// type = string
// description = "Market options for the instances"
2 changes: 1 addition & 1 deletion modules/controllers/02-locals.tf
@@ -1,5 +1,5 @@
locals {
tags_as_map = merge( # TODO: Remove name
tags_as_map = merge(
{
"Environment" = format("%s", var.environment)
"k8s.io/role/master" = "1" # Taken from the kops # TODO: CHECK
12 changes: 12 additions & 0 deletions modules/controllers/03-sg.tf
@@ -37,3 +37,15 @@ resource "aws_security_group_rule" "allow_all_lb" {
cidr_blocks = var.subnets_public_cidr_block
security_group_id = aws_security_group.k8s_controllers_node_sg.id
}

# https://kubernetes.io/docs/reference/networking/ports-and-protocols/
# When deploying the metrics server it may land on any worker node, and it needs to speak to the kubelet on
# every other node. We do not know all the security groups to add when we create a node, so we allow all the internal subnets
resource "aws_security_group_rule" "allow_kubelet_port_from_internal_subnets" {
type = "ingress"
from_port = 10250
to_port = 10250
protocol          = "tcp"
cidr_blocks = var.subnets_private_cidr_block
security_group_id = aws_security_group.k8s_controllers_node_sg.id
}
21 changes: 12 additions & 9 deletions modules/controllers/04-asg.tf
@@ -15,7 +15,7 @@ data "template_file" "bootstrap_node_k8s_controllers" {
k8s_deb_package_version = var.k8s_deb_package_version
kubeadm_install_version = var.kubeadm_install_version
pre_install = var.userdata_pre_install
cni_install = var.userdata_cni_install
cni_file_location = var.cni_file_location
kubeadm_join_config = data.template_file.bootstrap_k8s_controllers_kubeadm_join_config.rendered
post_install = var.userdata_post_install
kubeadm_config = data.template_file.bootstrap_k8s_controllers_kubeadm_config.rendered
@@ -33,6 +33,7 @@ data "template_file" "bootstrap_k8s_controllers_kubeadm_config" {
controller_join_token = var.controller_join_token
enable_admission_plugins = var.enable_admission_plugins
load_balancer_dns = aws_lb.k8s_controllers_external_lb.dns_name # Sign with the NLB name
authorization_mode = var.authorization_mode
}
}
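
The `hashicorp/template` provider used here is archived (and, for example, has no darwin_arm64 build); the built-in `templatefile()` function is the usual replacement. A hedged sketch of the same rendering (the template file path is an assumption):

```hcl
# Hypothetical equivalent using the built-in templatefile() function
# instead of the deprecated hashicorp/template provider.
locals {
  kubeadm_config = templatefile("${path.module}/kubeadm_config.tpl", {
    kubernetes_cluster       = var.kubernetes_cluster
    controller_join_token    = var.controller_join_token
    enable_admission_plugins = var.enable_admission_plugins
    load_balancer_dns        = aws_lb.k8s_controllers_external_lb.dns_name # Sign with the NLB name
    authorization_mode       = var.authorization_mode
  })
}
```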

@@ -77,7 +78,7 @@ resource "aws_launch_template" "controller" {
content {
device_name = "/dev/sda1" # root
ebs {
delete_on_termination = lookup(block_device_mappings.value, "delete_on_termination", true) # cattle not pets
delete_on_termination = lookup(block_device_mappings.value, "delete_on_termination", true) # TODO: Fix this on the master and reattach it
volume_type = lookup(block_device_mappings.value, "volume_type", var.ebs_volume_type)
volume_size = lookup(block_device_mappings.value, "volume_size", var.ebs_root_volume_size)
encrypted = lookup(block_device_mappings.value, "encrypted", true)
@@ -88,6 +89,7 @@
iam_instance_profile {
name = aws_iam_instance_profile.k8s_instance_profile.id
}
# TODO: Reimplement this for testing?
// instance_market_options {
// market_type = var.market_options
// spot_options {
@@ -110,15 +112,14 @@ resource "aws_network_interface" "fixed_private_ip" {
security_groups = [aws_security_group.k8s_controllers_node_sg.id]
}

# TODO: Use this var.k8s_controllers_num_nodes to cycle
resource "aws_autoscaling_group" "k8s_controllers_ag" {
count = var.k8s_controllers_num_nodes
name = "k8s-controller-${count.index}-${var.environment}-${var.kubernetes_cluster}-${random_string.seed.result}"
max_size = 1
min_size = 1
desired_capacity = 1
health_check_grace_period = 300
health_check_type = "EC2"
health_check_grace_period = var.health_check_grace_period
health_check_type = var.health_check_type
force_delete = false
metrics_granularity = "1Minute"
wait_for_capacity_timeout = "10m"
@@ -131,10 +132,6 @@
version = "$Latest"
}

# load_balancers = [
# aws_elb.k8s_controllers_internal_elb.name,
# ]

termination_policies = [
"OldestInstance",
]
@@ -160,6 +157,12 @@
propagate_at_launch = true
}

tag {
key = "Name"
value = format("k8s-controller-%s-%s-%s-%s", var.unique_identifier, var.environment, random_string.seed.result, count.index)
propagate_at_launch = true
}

dynamic "tag" {
for_each = local.tags_for_asg
content {
31 changes: 30 additions & 1 deletion modules/controllers/CHANGELOG.md
@@ -1,6 +1,32 @@
# CHANGELOG

## 7.0.0
## 8.0.0

DO NOT USE 7.0.0. Use this version instead.

### Breaking changes

- The variable to pass the CNI to install has been renamed, and now only requires the HTTP location of the manifest file to apply

### Features & Changes

- Controller nodes are now tagged with a unique 'Name' tag
- health_check_type and health_check_grace_period are now variables
- Port 10250 is now open on all nodes to the internal subnets CIDR to allow the metrics server to work
- Added a kubectl alias and bash completion, just not to have to set them up every time :D
- The authorization-mode option for the API server can now be modified (in case you need to add Webhook)

### Bugfixes

- Fixed the auth issue for anonymous connections (nodes while they register)
- Fixed a race condition in case the master cannot speak to... itself while installing the CNI

### Known bugs/issues

## 7.0.0 (DO NOT USE THIS VERSION)

I left in a temporary workaround to make the nodes register, but it gives too much power to anonymous users.
Use the next version, in which the correct fix is implemented.

### Breaking changes

@@ -19,6 +45,9 @@

### Known bugs/issues

- I left in a temporary workaround to make the nodes register, but it gives too much power to anonymous users. Use the next version, in which the correct fix is implemented.
- Controller nodes are not able to re-join the cluster if they die.

## 6.0.0

### Breaking changes
32 changes: 2 additions & 30 deletions modules/controllers/README.md
@@ -4,19 +4,6 @@

This module will create a new kubernetes cluster inside your VPC.

Support:

k8s 1.12.x NO
k8s 1.13.12 YES
k8s 1.14.x ?
k8s 1.15.x YES
k8s 1.16.x YES
k8s 1.17.x YES
k8s 1.18.8 YES
k8s 1.19.4 YES
k8s 1.20.1 YES
k8s 1.21.0 YES

## Usage

- [Utilities](../../examples/)
@@ -31,8 +18,8 @@ Be careful to pass the right subnets in availability_zone!

You can choose what version of k8s to install passing this variables:

k8s_deb_package_version = "1.19.4"
kubeadm_install_version = "stable-1.19"
k8s_deb_package_version = "1.27.5"
kubeadm_install_version = "stable-1.27"

## Debug

@@ -221,18 +208,3 @@ No modules.
<!-- END OF PRE-COMMIT-TERRAFORM DOCS HOOK -->

# TODO

- Possible move to use parts (template_cloudinit_config)
- check kubelet extra args if it's not deprecated
- Use the loadbalancer to register to the masters
- Use datasource instead of heredoc
- Change ebs partition
- Fix CA verification
- Make KCTL_USER parametric
- FIX the bash
- FIX internal_network_cidr
- Add tags on resources with path to the module they are defined it
- Health check on the asg is done via ELB (check for using ALB)
- Export the information needed to create a target group outside the module
- Fix/reduce IAM roles power
- Access logs for lbs