feat(on-premises): node labels and annotations #320

Merged: 5 commits, Dec 19, 2024
120 changes: 120 additions & 0 deletions docs/releases/unreleased.md
@@ -0,0 +1,120 @@
# Kubernetes Fury Distribution Release vTBD

Welcome to KFD release `vTBD`.

The distribution is maintained with ❤️ by the team [SIGHUP](https://sighup.io/).

## New Features since `v1.30.0`

### Installer Updates

- [on-premises](https://github.com/sighupio/fury-kubernetes-on-premises) 📦 installer: [**vTBD**](https://github.com/sighupio/fury-kubernetes-on-premises/releases/tag/vTBD)
- TBD
- [eks](https://github.com/sighupio/fury-eks-installer) 📦 installer: [**vTBD**](https://github.com/sighupio/fury-eks-installer/releases/tag/vTBD)
- TBD

### Module updates

- [networking](https://github.com/sighupio/fury-kubernetes-networking) 📦 core module: [**vTBD**](https://github.com/sighupio/fury-kubernetes-networking/releases/tag/vTBD)
- TBD
- [monitoring](https://github.com/sighupio/fury-kubernetes-monitoring) 📦 core module: [**vTBD**](https://github.com/sighupio/fury-kubernetes-monitoring/releases/tag/vTBD)
- TBD
- [logging](https://github.com/sighupio/fury-kubernetes-logging) 📦 core module: [**vTBD**](https://github.com/sighupio/fury-kubernetes-logging/releases/tag/vTBD)
- TBD
- [ingress](https://github.com/sighupio/fury-kubernetes-ingress) 📦 core module: [**vTBD**](https://github.com/sighupio/fury-kubernetes-ingress/releases/tag/vTBD)
- TBD
- [auth](https://github.com/sighupio/fury-kubernetes-auth) 📦 core module: [**vTBD**](https://github.com/sighupio/fury-kubernetes-auth/releases/tag/vTBD)
- TBD
- [dr](https://github.com/sighupio/fury-kubernetes-dr) 📦 core module: [**vTBD**](https://github.com/sighupio/fury-kubernetes-dr/releases/tag/vTBD)
- TBD
- [tracing](https://github.com/sighupio/fury-kubernetes-tracing) 📦 core module: [**vTBD**](https://github.com/sighupio/fury-kubernetes-tracing/releases/tag/vTBD)
- TBD
- [opa](https://github.com/sighupio/fury-kubernetes-opa) 📦 core module: [**vTBD**](https://github.com/sighupio/fury-kubernetes-opa/releases/tag/vTBD)
- TBD
- [aws](https://github.com/sighupio/fury-kubernetes-aws) 📦 module: [**vTBD**](https://github.com/sighupio/fury-kubernetes-aws/releases/tag/vTBD)
- TBD

## Breaking changes 💔

- **TBD**: TBD

## New features 🌟

- [[#320](https://github.com/sighupio/fury-distribution/pull/320)] **Custom Labels and Annotations for on-premises nodes**: the configuration file for on-premises clusters now supports specifying custom labels and annotations for the control-plane nodes and for the node groups. The labels and annotations are applied to all the nodes in the group, and deleted from the nodes when removed from the configuration. Usage example:

```yaml
...
spec:
kubernetes:
masters:
hosts:
- name: master1
ip: 192.168.66.29
- name: master2
ip: 192.168.66.30
- name: master3
ip: 192.168.66.31
labels:
node-role.kubernetes.io/dungeon-master: ""
dnd-enabled: "true"
annotations:
level: "100"
nodes:
- name: infra
hosts:
- name: infra1
ip: 192.168.66.32
- name: infra2
ip: 192.168.66.33
- name: infra3
ip: 192.168.66.34
taints:
- effect: NoSchedule
key: node.kubernetes.io/role
value: infra
labels:
a-label: with-content
empty-label: ""
label/sighup: "with-slashes"
node-role.kubernetes.io/wizard: ""
dnd-enabled: "true"
annotations:
with-spaces: "annotation with spaces"
without-spaces: annotation-without-spaces
level: "20"
- name: worker
hosts:
- name: worker1
ip: 192.168.66.35
taints: []
labels:
node-role.kubernetes.io/barbarian: ""
dnd-enabled: "true"
label-custom: "with-value"
annotations:
level: "10"
- name: empty-labels-and-annotations
hosts:
- name: empty1
ip: 192.168.66.50
taints: []
labels:
annotations:
- name: undefined-labels-and-annotations
hosts:
- name: undefined1
ip: 192.168.66.51
taints: []
...
```

## Fixes 🐞

- TBD
<!-- Example:
- [[#264](https://github.com/sighupio/fury-distribution/pull/264)] Hubble UI: is now shown in the correct group in the Directory
-->

## Upgrade procedure

Check the [upgrade docs](https://docs.kubernetesfury.com/docs/upgrades/upgrades) for the detailed procedure.
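The apply-then-delete behavior described in the release note above (labels removed from the configuration are deleted from the nodes, while the role label survives) can be sketched as a small reconciliation step. The following Python is purely illustrative — `reconcile_labels` and all names are hypothetical, not part of the PR:

```python
def reconcile_labels(previous: dict, current: dict, protected: set) -> list:
    """Build kubectl-style label arguments: `key=value` applies a label,
    a trailing `key-` deletes it. Hypothetical helper, for illustration."""
    args = [f"{k}={v}" for k, v in current.items()]
    # Labels that existed before but are gone from the new configuration get
    # deleted, except protected ones such as the node-role label.
    removed = [k for k in previous if k not in current and k not in protected]
    args += [f"{k}-" for k in removed]
    return args

prev = {"dnd-enabled": "true", "old-label": "x", "node-role.kubernetes.io/infra": ""}
curr = {"dnd-enabled": "true"}
print(reconcile_labels(prev, curr, {"node-role.kubernetes.io/infra"}))
# → ['dnd-enabled=true', 'old-label-']
```

The distribution implements this reconciliation with Ansible/Jinja filters in the playbook template changed by this PR; the sketch above only mirrors the semantics.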
40 changes: 39 additions & 1 deletion schemas/public/onpremises-kfd-v1alpha2.json
@@ -298,6 +298,14 @@
"items": {
"$ref": "#/$defs/Spec.Kubernetes.Masters.Host"
}
},
"labels": {
"description": "Optional additional Kubernetes labels that will be added to the control-plane nodes. Follows Kubernetes labels format.\n\nNote: **Existing labels with the same key will be overwritten** and the label setting the `control-plane` role cannot be deleted.",
"$ref": "#/$defs/Types.KubeLabels"
},
"annotations": {
"description": "Optional additional Kubernetes annotations that will be added to the control-plane nodes. Follows Kubernetes annotations format. **Existing annotations with the same key will be overwritten**.",
"$ref": "#/$defs/Types.KubeAnnotations"
}
},
"required": [
@@ -353,6 +361,14 @@
"items": {
"$ref": "#/$defs/Types.KubeTaints"
}
},
"labels": {
"description": "Optional additional Kubernetes labels that will be added to the nodes in this node group. Follows Kubernetes labels format.\n\nNote: **Existing labels with the same key will be overwritten** and the label setting the node role to the node group name cannot be deleted.",
"$ref": "#/$defs/Types.KubeLabels"
},
"annotations": {
"description": "Optional additional Kubernetes annotations that will be added to the nodes in this node group. Follows Kubernetes annotations format. **Existing annotations with the same key will be overwritten**.",
"$ref": "#/$defs/Types.KubeAnnotations"
}
},
"required": [
@@ -2263,7 +2279,29 @@
"pattern": "^(http|https)\\:\\/\\/.+$"
},
"Types.KubeLabels": {
"type": "object",
"type": [
"object",
"null"
],
"propertyNames": {
"pattern": "^([a-zA-Z0-9][a-zA-Z0-9-.]*[a-zA-Z0-9]/)?([a-zA-Z0-9][-a-zA-Z0-9_.]*)?[a-zA-Z0-9]$",
"maxLength": 253
},
"additionalProperties": {
"type": "string",
"pattern": "^(([a-zA-Z0-9][-a-zA-Z0-9_.]*)?[a-zA-Z0-9])?$",
"maxLength": 63
}
},
"Types.KubeAnnotations": {
"type": [
"object",
"null"
],
"propertyNames": {
"pattern": "^([a-zA-Z0-9][a-zA-Z0-9-.]*[a-zA-Z0-9]/)?([a-zA-Z0-9][-a-zA-Z0-9_.]*)?[a-zA-Z0-9]$",
"maxLength": 253
},
"additionalProperties": {
"type": "string"
}
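The two regular expressions in `Types.KubeLabels` can be exercised directly. Below is a minimal Python sketch: the patterns are copied verbatim from the schema above, while the `is_valid_label` helper is illustrative and not part of the schema or the distribution:

```python
import re

# Key and value patterns copied verbatim from Types.KubeLabels above.
KEY_RE = re.compile(r"^([a-zA-Z0-9][a-zA-Z0-9-.]*[a-zA-Z0-9]/)?([a-zA-Z0-9][-a-zA-Z0-9_.]*)?[a-zA-Z0-9]$")
VALUE_RE = re.compile(r"^(([a-zA-Z0-9][-a-zA-Z0-9_.]*)?[a-zA-Z0-9])?$")

def is_valid_label(key: str, value: str) -> bool:
    """Mimic the schema's propertyNames / additionalProperties checks."""
    return (len(key) <= 253 and bool(KEY_RE.match(key))
            and len(value) <= 63 and bool(VALUE_RE.match(value)))

print(is_valid_label("node-role.kubernetes.io/wizard", ""))  # prefixed key, empty value → True
print(is_valid_label("dnd-enabled", "true"))                 # → True
print(is_valid_label("-bad-key", "x"))                       # key can't start with '-' → False
```

Note that the schema also allows the whole `labels`/`annotations` object to be `null`, which is what makes the empty and undefined node groups in the release-note example valid.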
94 changes: 81 additions & 13 deletions templates/kubernetes/onpremises/create-playbook.yaml.tpl
@@ -122,19 +122,87 @@
tags:
- kube-worker

# We set the node's role to the node group's name in furyctl.yaml
# We set a custom label with the role as part of the kubeadm bootstrap of the
# node via a kubelet flag.
# The command below also sets the standard label that kubectl uses to show
# the node role when you do `kubectl get nodes`. This label cannot be set via
# the kubelet flag for security reasons.
- name: Label nodes with role
- name: Gather needed information for updating control-plane and nodes labels and annotations
hosts: nodes, master
tasks:
# TODO: furyctl has already checked for this secret, maybe we can pass it on from furyctl to the template engine
# somehow instead of downloading it again.
- name: Get previous cluster configuration
delegate_to: localhost
ansible.builtin.command: "{{ .paths.kubectl }} get secrets -n kube-system furyctl-config -o jsonpath='{.data.config}'"
register: previous_state
# We ignore the secret not found error because when we init the cluster the secret does not exist yet, so the command fails.
# Notice that all conditions must be true.
failed_when:
- previous_state.rc != 0
- '"Error from server (NotFound): secrets \"furyctl-config\" not found" not in previous_state.stderr'
# This is common for all the nodes, just run it once.
run_once: true

- name: Deserialize previous cluster configuration into a variable
delegate_to: localhost
ansible.builtin.set_fact:
furyctl_yaml: "{{ "{{ previous_state.stdout | b64decode | from_yaml }}" }}"
when: previous_state.rc == 0 and previous_state.stdout != "null"
# This is common for all the nodes, just run it once.
run_once: true

- name: Preparing control-plane labels and annotations
hosts: master
tasks:
- name: Format control-plane labels and annotations for usage with kubectl
vars:
# We calculate the removed labels and annotations so we can pass them as parameters with the appended `-` to delete them from the nodes.
# If they were not defined in the previous configuration we default to an empty list for calculating the difference.
# We use the `reject` filter to remove the role label in case it was added manually, so it does not get removed.
removed_cp_labels: "{{ "{{ furyctl_yaml.spec.kubernetes.masters.labels | default([], true) | difference(kubernetes_node_labels|default([], true)) | reject('match', 'node-role.kubernetes.io/control-plane') }}" }}"
removed_cp_annotations: "{{ "{{ furyctl_yaml.spec.kubernetes.masters.annotations | default([], true) | difference(kubernetes_node_annotations|default([], true)) }}" }}"
ansible.builtin.set_fact:
# We apply all the labels defined in the new configuration and delete the removed ones. We don't care if the rest are new or existed before, we just overwrite.
node_labels: "{{ "{% for l in kubernetes_node_labels|default([], true) %}{{l}}={{kubernetes_node_labels[l]}} {% endfor %}{% for rl in removed_cp_labels %}{{rl}}- {% endfor %}" }}"
node_annotations: "{{ "{% for a in kubernetes_node_annotations|default([], true) %} {{a}}={{kubernetes_node_annotations[a]|quote}} {% endfor %} {% for ra in removed_cp_annotations %}{{ra}}- {% endfor %}" }}"
# We run this once because labels are common for all the control plane hosts.
run_once: true

- name: Preparing nodes labels and annotations
hosts: nodes
tasks:
- name: Get node's name and role
set_fact:
node_name: "{{ print "{{ kubernetes_hostname }}" }}"
node_role: "{{ print "{{ kubernetes_role }}" }}"
- name: Label node
- name: Format nodes labels and annotations for usage with kubectl
vars:
# For the nodes we can't directly access the labels and annotations properties like we do for the masters,
# because `nodes` is a list of node groups.
# We need to identify which element of the `nodes` list is the right one for this node.
node_group_details: "{{ "{{ furyctl_yaml.spec.kubernetes.nodes | selectattr('name', '==', kubernetes_role) | first }}" }}"
# We calculate the removed labels and annotations accessing the element of the `nodes` property we got in the previous line.
# We use the `reject` filter to remove the role label in case it was added manually, so it does not get removed.
node_role_label: {{ "node-role.kubernetes.io/{{ kubernetes_role }}" }}
removed_node_labels: "{{ "{{ node_group_details.labels|default([], true) | difference(kubernetes_node_labels|default([], true)) | reject('match', node_role_label) }}" }}"
removed_node_annotations: "{{ "{{ node_group_details.annotations|default([], true) | difference(kubernetes_node_annotations|default([], true)) }}" }}"
ansible.builtin.set_fact:
node_labels: "{{ "{% for l in kubernetes_node_labels|default([], true) %}{{l}}={{kubernetes_node_labels[l]}} {% endfor %}{% for rl in removed_node_labels %}{{rl}}- {% endfor %}" }}"
node_annotations: "{{ "{% for a in kubernetes_node_annotations|default([], true) %} {{a}}={{kubernetes_node_annotations[a]|quote}} {% endfor %} {% for ra in removed_node_annotations %}{{ra}}- {% endfor %}" }}"

- name: Update control-plane and nodes labels and annotations
hosts: nodes, master
tasks:

# We set the label that determines the node role here because the kubelet can't do it for security reasons.
# By default we set the role to the name of the node group in the furyctl.yaml file.
# We do this only for the regular nodes. We don't need it for the control plane because kubeadm configures
# the kubelet to do it automatically.
- name: Set nodes role based on the node group's name
delegate_to: localhost
ansible.builtin.command: "{{ .paths.kubectl }} {{ "label node {{ kubernetes_hostname }} node-role.kubernetes.io/{{ kubernetes_role }}= --kubeconfig={{ kubernetes_kubeconfig_path }}admin.conf" }}"
when: kubernetes_role is defined

# Update the control-plane and node labels with what we calculated before, only if needed.
- name: Update node labels
delegate_to: localhost
ansible.builtin.command: "{{ .paths.kubectl }} {{ "label node {{ kubernetes_hostname }} {{ node_labels }} --overwrite --kubeconfig={{ kubernetes_kubeconfig_path }}admin.conf" }}"
when: node_labels is defined and node_labels|trim != ''

# Update the control-plane and node annotations with what we calculated before, only if needed.
- name: Update node annotations
delegate_to: localhost
shell: "{{ .paths.kubectl }} {{ print "label node {{ node_name }} node-role.kubernetes.io/{{ node_role }}= --kubeconfig={{ kubernetes_kubeconfig_path }}admin.conf" }}"
ansible.builtin.command: "{{ .paths.kubectl }} {{ "annotate node {{ kubernetes_hostname }} {{ node_annotations }} --overwrite --kubeconfig={{ kubernetes_kubeconfig_path }}admin.conf" }}"
when: node_annotations is defined and node_annotations|trim != ''
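The Jinja loops in this playbook concatenate `key=value` pairs plus `key-` deletion markers into one argument string, so each host needs only a single `kubectl label` / `kubectl annotate` invocation. A rough Python equivalent of that formatting step (the `format_label_args` helper and sample data are illustrative, not from the template):

```python
def format_label_args(labels: dict, removed: list) -> str:
    """Join labels to apply and labels to delete into one kubectl argument string."""
    parts = [f"{k}={v}" for k, v in labels.items()]
    parts += [f"{k}-" for k in removed]  # a trailing '-' tells kubectl to delete the key
    return " ".join(parts)

args = format_label_args({"dnd-enabled": "true", "level": "20"}, ["stale-label"])
print(args)  # → dnd-enabled=true level=20 stale-label-
# Resulting command (illustrative):
#   kubectl label node worker1 dnd-enabled=true level=20 stale-label- --overwrite
```

`--overwrite` is what lets the playbook re-apply the full label set idempotently without caring whether a key is new or already present.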
16 changes: 16 additions & 0 deletions templates/kubernetes/onpremises/hosts.yaml.tpl
@@ -28,6 +28,14 @@ all:
ansible_host: "{{ $h.ip }}"
kubernetes_apiserver_advertise_address: "{{ $h.ip }}"
kubernetes_hostname: "{{ $h.name }}.{{ $dnsZone }}"
{{- if index $.spec.kubernetes.masters "labels" }}
kubernetes_node_labels:
{{ $.spec.kubernetes.masters.labels | toYaml | indent 12 | trim }}
{{- end }}
{{- if index $.spec.kubernetes.masters "annotations" }}
kubernetes_node_annotations:
{{ $.spec.kubernetes.masters.annotations | toYaml | indent 12 | trim }}
{{- end }}
{{- end }}
vars:
dns_zone: "{{ $dnsZone }}"
@@ -102,6 +110,14 @@
kubernetes_taints:
{{ $n.taints | toYaml | indent 14 | trim }}
{{- end }}
{{- if index $n "labels" }}
kubernetes_node_labels:
{{ $n.labels | toYaml | indent 14 | trim }}
{{- end -}}
{{- if index $n "annotations" }}
kubernetes_node_annotations:
{{ $n.annotations | toYaml | indent 14 | trim }}
{{- end -}}
{{- end }}
ungrouped: {}
vars:
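What the inventory template above does per host is a plain data transformation: copy each node group's optional `labels`/`annotations` into per-host Ansible variables, emitting the keys only when they are set, mirroring the `{{- if index $n "labels" }}` guards. A hedged Python approximation (the `host_vars` helper and sample data are illustrative, not from the repository):

```python
def host_vars(group: dict) -> dict:
    """Build per-host inventory vars for one node group (illustrative sketch)."""
    hosts = {}
    for h in group["hosts"]:
        v = {"ansible_host": h["ip"], "kubernetes_hostname": h["name"]}
        # Emitted only when present and non-empty, like the template's `if index` guards.
        if group.get("labels"):
            v["kubernetes_node_labels"] = group["labels"]
        if group.get("annotations"):
            v["kubernetes_node_annotations"] = group["annotations"]
        hosts[h["name"]] = v
    return hosts

group = {"name": "infra",
         "hosts": [{"name": "infra1", "ip": "192.168.66.32"}],
         "labels": {"dnd-enabled": "true"}}
print(host_vars(group))
```

This is why the playbook can read `kubernetes_node_labels` and `kubernetes_node_annotations` as host facts: the template materializes them from the furyctl configuration before Ansible runs.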