
Commit

Merge pull request #79 from sighupio/feat/update-calico-add-compatibility-to-1.29

Feat: update calico add compatibility to 1.29, release v1.17.0
nutellinoit authored Apr 22, 2024
2 parents c1c9200 + d252aca commit b626bc6
Showing 11 changed files with 749 additions and 36 deletions.
521 changes: 514 additions & 7 deletions .drone.yml

Large diffs are not rendered by default.

9 changes: 5 additions & 4 deletions README.md
@@ -5,7 +5,7 @@
</h1>
<!-- markdownlint-enable MD033 -->

![Release](https://img.shields.io/badge/Latest%20Release-v1.15.2-blue)
![Release](https://img.shields.io/badge/Latest%20Release-v1.17.0-blue)
![License](https://img.shields.io/github/license/sighupio/fury-kubernetes-networking?label=License)
![Slack](https://img.shields.io/badge/slack-@kubernetes/fury-yellow.svg?logo=slack&label=Slack)

@@ -29,9 +29,9 @@ Kubernetes Fury Networking provides the following packages:

| Package | Version | Description |
| -------------------------- | -------- | ---------------------------------------------------------------------------------------------------------------------------------------------------- |
| [calico](katalog/calico) | `3.27.0` | [Calico][calico-page] CNI Plugin. For clusters with `< 50` nodes. |
| [calico](katalog/calico) | `3.27.3` | [Calico][calico-page] CNI Plugin. For clusters with `< 50` nodes. |
| [cilium](katalog/cilium) | `1.15.2` | [Cilium][cilium-page] CNI Plugin. For clusters with `< 200` nodes. |
| [tigera](katalog/tigera) | `1.32.3` | [Tigera Operator][tigera-page], a Kubernetes Operator for Calico, provides pre-configured installations for on-prem and for EKS in policy-only mode. |
| [tigera](katalog/tigera) | `1.32.7` | [Tigera Operator][tigera-page], a Kubernetes Operator for Calico, provides pre-configured installations for on-prem and for EKS in policy-only mode. |
| [ip-masq](katalog/ip-masq) | `2.8.0` | The `ip-masq-agent` configures iptables rules to implement IP masquerading functionality |

> The resources in these packages are deployed in the `kube-system` namespace, except for the Tigera operator.
@@ -45,6 +45,7 @@ Click on each package to see its full documentation.
| `1.26.x` | :white_check_mark: | No known issues |
| `1.27.x` | :white_check_mark: | No known issues |
| `1.28.x` | :white_check_mark: | No known issues |
| `1.29.x` | :white_check_mark: | No known issues |


Check the [compatibility matrix][compatibility-matrix] for additional information on previous releases of the module.
@@ -67,7 +68,7 @@ Check the [compatibility matrix][compatibility-matrix] for additional information
```yaml
bases:
- name: networking
version: "v1.16.0"
version: "v1.17.0"
```
> See `furyctl` [documentation][furyctl-repo] for additional details about `Furyfile.yml` format.
21 changes: 11 additions & 10 deletions docs/COMPATIBILITY_MATRIX.md
@@ -1,15 +1,16 @@
# Compatibility Matrix

| Module Version / Kubernetes Version | 1.24.X | 1.25.X | 1.26.X | 1.27.X | 1.28.X |
| ----------------------------------- | ------------------ | ------------------ | ------------------ | ------------------ | ------------------ |
| v1.10.0 | :white_check_mark: | | | | |
| v1.11.0 | :white_check_mark: | :white_check_mark: | | | |
| v1.12.0 | :white_check_mark: | :white_check_mark: | | | |
| v1.12.1 | :white_check_mark: | :white_check_mark: | | | |
| v1.12.2 | :white_check_mark: | :white_check_mark: | | | |
| v1.14.0 | :white_check_mark: | :white_check_mark: | :white_check_mark: | | |
| v1.15.0 | | :white_check_mark: | :white_check_mark: | :white_check_mark: | |
| v1.16.0 | | | :white_check_mark: | :white_check_mark: | :white_check_mark: |
| Module Version / Kubernetes Version | 1.24.X | 1.25.X | 1.26.X | 1.27.X | 1.28.X | 1.29.X |
| ----------------------------------- | ------------------ | ------------------ | ------------------ | ------------------ | ------------------ | ------------------ |
| v1.10.0 | :white_check_mark: | | | | | |
| v1.11.0 | :white_check_mark: | :white_check_mark: | | | | |
| v1.12.0 | :white_check_mark: | :white_check_mark: | | | | |
| v1.12.1 | :white_check_mark: | :white_check_mark: | | | | |
| v1.12.2 | :white_check_mark: | :white_check_mark: | | | | |
| v1.14.0 | :white_check_mark: | :white_check_mark: | :white_check_mark: | | | |
| v1.15.0 | | :white_check_mark: | :white_check_mark: | :white_check_mark: | | |
| v1.16.0 | | | :white_check_mark: | :white_check_mark: | :white_check_mark: | |
| v1.17.0 | | | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: |


:white_check_mark: Compatible
32 changes: 32 additions & 0 deletions docs/releases/v1.17.0.md
@@ -0,0 +1,32 @@
# Networking Core Module Release 1.17.0

Welcome to the latest release of the `Networking` module of [`Kubernetes Fury Distribution`](https://github.com/sighupio/fury-distribution) maintained by team SIGHUP.

This minor release updates some components and adds support for Kubernetes 1.29.

## Component Images 🚢

| Component | Supported Version | Previous Version |
| ----------------- | -------------------------------------------------------------------------------- | ---------------- |
| `calico` | [`v3.27.3`](https://docs.tigera.io/calico/3.27/about/) | `v3.27.0` |
| `cilium` | [`v1.15.2`](https://github.com/cilium/cilium/releases/tag/v1.15.2) | No update |
| `ip-masq` | [`v2.8.0`](https://github.com/kubernetes-sigs/ip-masq-agent/releases/tag/v2.8.0) | No update |
| `tigera-operator` | [`v1.32.7`](https://github.com/tigera/operator/releases/tag/v1.32.7) | `v1.32.3` |

> Please refer to the individual release notes for detailed information on each release.

## Update Guide 🦮

### Process

1. Just deploy as usual:

```bash
kustomize build katalog/calico | kubectl apply -f -
# OR
kustomize build katalog/tigera/on-prem | kubectl apply -f -
# OR
kustomize build katalog/cilium | kubectl apply -f -
```

If you are upgrading from previous versions, please refer to the [`v1.16.0` release notes](https://github.com/sighupio/fury-kubernetes-networking/releases/tag/v1.16.0).
6 changes: 3 additions & 3 deletions katalog/calico/MAINTENANCE.md
@@ -7,7 +7,7 @@ To update the Calico package with upstream, please follow the next steps:
1. Download upstream manifests:

```bash
export CALICO_VERSION=3.27.0
export CALICO_VERSION=3.27.3
curl -L https://raw.githubusercontent.com/projectcalico/calico/v${CALICO_VERSION}/manifests/calico.yaml -o calico-${CALICO_VERSION}.yaml
```

@@ -20,7 +20,7 @@ Compare the `deploy.yaml` file with the downloaded `calico-${CALICO_VERSION}` file
3. Update the `kustomization.yaml` file with the right image versions.

```bash
export CALICO_IMAGE_TAG=v3.27.0
export CALICO_IMAGE_TAG=v3.27.3
kustomize edit set image docker.io/calico/kube-controllers=registry.sighup.io/fury/calico/kube-controllers:${CALICO_IMAGE_TAG}
kustomize edit set image docker.io/calico/cni=registry.sighup.io/fury/calico/cni:${CALICO_IMAGE_TAG}
kustomize edit set image docker.io/calico/node=registry.sighup.io/fury/calico/node:${CALICO_IMAGE_TAG}
@@ -39,7 +39,7 @@ See <https://docs.tigera.io/calico/latest/operations/monitor/monitor-component-m
1. Download the dashboard from upstream:

```bash
export CALICO_VERSION=3.27.0
export CALICO_VERSION=3.27.3
# ⚠️ Assuming $PWD == root of the project
# We take `felix-dashboard.json` from the downloaded YAML; we are not deploying `typha`, so we don't need its dashboard.
curl -L https://raw.githubusercontent.com/projectcalico/calico/v${CALICO_VERSION}/manifests/grafana-dashboards.yaml | yq '.data["felix-dashboard.json"]' | sed 's/calico-demo-prometheus/prometheus/g' | jq > ./monitoring/dashboards/felix-dashboard.json
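# NOTE (illustrative sketch, not part of the upstream pipeline): the sed
# rename above can be sanity-checked locally on a sample snippet, without
# downloading the dashboards:
snippet='{"datasource": "calico-demo-prometheus"}'
renamed=$(echo "$snippet" | sed 's/calico-demo-prometheus/prometheus/g')
echo "$renamed"   # → {"datasource": "prometheus"}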
6 changes: 3 additions & 3 deletions katalog/calico/README.md
@@ -21,9 +21,9 @@ The deployment of Calico consists of a daemon set running on every node (includi
## Image repository and tag

- calico images:
- `calico/kube-controllers:v3.27.0`.
- `calico/cni:v3.27.0`.
- `calico/node:v3.27.0`.
- `calico/kube-controllers:v3.27.3`.
- `calico/cni:v3.27.3`.
- `calico/node:v3.27.3`.
- calico repositories:
- [https://github.com/projectcalico/kube-controllers](https://github.com/projectcalico/calico/tree/master/kube-controllers).
- [https://github.com/projectcalico/cni-plugin](https://github.com/projectcalico/calico/tree/master/cni-plugin).
6 changes: 3 additions & 3 deletions katalog/calico/kustomization.yaml
@@ -10,13 +10,13 @@ namespace: kube-system
images:
- name: docker.io/calico/cni
newName: registry.sighup.io/fury/calico/cni
newTag: v3.27.0
newTag: v3.27.3
- name: docker.io/calico/kube-controllers
newName: registry.sighup.io/fury/calico/kube-controllers
newTag: v3.27.0
newTag: v3.27.3
- name: docker.io/calico/node
newName: registry.sighup.io/fury/calico/node
newTag: v3.27.0
newTag: v3.27.3

# Resources needed for Monitoring
resources:
157 changes: 157 additions & 0 deletions katalog/tests/calico/tigera.sh
@@ -0,0 +1,157 @@
#!/bin/bash
# Copyright (c) 2024-present SIGHUP s.r.l All rights reserved.
# Use of this source code is governed by a BSD-style
# license that can be found in the LICENSE file.

# shellcheck disable=SC2154

load ./../helper

@test "Nodes in Not Ready state" {
info
nodes_not_ready() {
kubectl get nodes --no-headers | awk '{print $2}' | uniq | grep -q NotReady
}
run nodes_not_ready
[ "$status" -eq 0 ]
}

@test "Install Prerequisites" {
info
install() {
kubectl apply -f 'https://raw.githubusercontent.com/sighupio/fury-kubernetes-monitoring/v3.1.0/katalog/prometheus-operator/crds/0servicemonitorCustomResourceDefinition.yaml'
kubectl apply -f 'https://raw.githubusercontent.com/sighupio/fury-kubernetes-monitoring/v3.1.0/katalog/prometheus-operator/crds/0prometheusruleCustomResourceDefinition.yaml'
}
run install
[ "$status" -eq 0 ]
}

@test "Install Tigera operator and calico operated" {
info
test() {
apply katalog/tigera/on-prem
}
loop_it test 60 5
status=${loop_it_result}
[ "$status" -eq 0 ]
}

@test "Calico Kube Controller is Running" {
info
test() {
kubectl get pods -l k8s-app=calico-kube-controllers -o json -n calico-system |jq '.items[].status.containerStatuses[].ready' | uniq | grep -q true
}
loop_it test 60 5
status=${loop_it_result}
[ "$status" -eq 0 ]
}

@test "Calico Node is Running" {
info
test() {
kubectl get pods -l k8s-app=calico-node -o json -n calico-system |jq '.items[].status.containerStatuses[].ready' | uniq | grep -q true
}
loop_it test 60 5
status=${loop_it_result}
[ "$status" -eq 0 ]
}

@test "Nodes in ready State" {
info
test() {
statuses=$(kubectl get nodes --no-headers | awk '{print $2}' | sort -u)
# require an exact "Ready": a plain `grep -q Ready` would also match "NotReady"
[ "$statuses" = "Ready" ]
}
run test
[ "$status" -eq 0 ]
}
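# NOTE (illustrative sketch, not part of the upstream test suite): when
# matching node readiness with grep, a plain `grep -q Ready` also matches
# "NotReady", so such a check can pass even when no node is Ready.
# A cluster-free demonstration of the pitfall and the stricter exact match:
demo_statuses="NotReady
NotReady"
if echo "$demo_statuses" | grep -q Ready; then
loose="matches"   # substring match passes even though no node is Ready
fi
if ! echo "$demo_statuses" | sort -u | grep -qx Ready; then
strict="rejects"  # exact line match correctly fails
fi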

@test "Apply whitelist-system-ns GlobalNetworkPolicy" {
info
install() {
kubectl apply -f examples/globalnetworkpolicies/1.whitelist-system-namespace.yml
}
run install
[ "$status" -eq 0 ]
}

@test "Create a non-whitelisted namespace with an app" {
info
install() {
kubectl create ns test-1
kubectl apply -f katalog/tests/calico/resources/echo-server.yaml -n test-1
kubectl wait -n test-1 --for=condition=ready --timeout=120s pod -l app=echoserver
}
run install
[ "$status" -eq 0 ]
}

@test "Test app within the same namespace" {
info
test() {
kubectl create job -n test-1 isolated-test --image travelping/nettools -- curl http://echoserver.test-1.svc.cluster.local
kubectl wait -n test-1 --for=condition=complete --timeout=30s job/isolated-test
}
run test
[ "$status" -eq 0 ]
}

@test "Test app from a system namespace" {
info
test() {
kubectl create job -n kube-system isolated-test --image travelping/nettools -- curl http://echoserver.test-1.svc.cluster.local
kubectl wait -n kube-system --for=condition=complete --timeout=30s job/isolated-test
}
run test
[ "$status" -eq 0 ]
}

@test "Test app from a different namespace" {
info
test() {
kubectl create ns test-1-1
kubectl create job -n test-1-1 isolated-test --image travelping/nettools -- curl http://echoserver.test-1.svc.cluster.local
kubectl wait -n test-1-1 --for=condition=complete --timeout=30s job/isolated-test
}
run test
[ "$status" -eq 0 ]
}

@test "Apply deny-all GlobalNetworkPolicy" {
info
install() {
kubectl apply -f examples/globalnetworkpolicies/2000.deny-all.yml
}
run install
[ "$status" -eq 0 ]
}

@test "Test app from the same namespace (isolated namespace)" {
info
test() {
kubectl create job -n test-1 isolated-test-1 --image travelping/nettools -- curl http://echoserver.test-1.svc.cluster.local
kubectl wait -n test-1 --for=condition=complete --timeout=30s job/isolated-test-1
}
run test
[ "$status" -eq 1 ]
}

@test "Test app from a system namespace (isolated namespace)" {
info
test() {
kubectl create job -n kube-system isolated-test-1 --image travelping/nettools -- curl http://echoserver.test-1.svc.cluster.local
kubectl wait -n kube-system --for=condition=complete --timeout=30s job/isolated-test-1
}
run test
[ "$status" -eq 0 ]
}

@test "Test app from a different namespace (isolated namespace)" {
info
test() {
kubectl create job -n test-1-1 isolated-test-1 --image travelping/nettools -- curl http://echoserver.test-1.svc.cluster.local
kubectl wait -n test-1-1 --for=condition=complete --timeout=30s job/isolated-test-1
}
run test
[ "$status" -eq 1 ]
}
2 changes: 1 addition & 1 deletion katalog/tests/helper.bash
@@ -3,7 +3,7 @@

apply (){
kustomize build $1 >&2
kustomize build $1 | kubectl apply -f - 2>&3
kustomize build $1 | kubectl apply --server-side -f - 2>&3
}
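# NOTE (illustrative; the motivation is an assumption about this change):
# client-side `kubectl apply` stores the full object in the
# `kubectl.kubernetes.io/last-applied-configuration` annotation, which is
# capped at 262144 bytes, so very large manifests (e.g. the tigera-operator
# CRDs) can fail without `--server-side`. A cluster-free size sanity check:
annotation_limit=262144
manifest_bytes=$(yes x | head -n 300000 | wc -c | tr -d ' ')  # ~600 kB of "x\n" lines
if [ "$manifest_bytes" -gt "$annotation_limit" ]; then
verdict="needs server-side apply"
fi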

delete (){
6 changes: 3 additions & 3 deletions katalog/tigera/MAINTENANCE.md
@@ -11,7 +11,7 @@ To update the YAML file, run the following command:

```bash
# assuming katalog/tigera is the root of the repository
export CALICO_VERSION="3.27.0"
export CALICO_VERSION="3.27.3"
curl "https://raw.githubusercontent.com/projectcalico/calico/v${CALICO_VERSION}/manifests/tigera-operator.yaml" --output operator/tigera-operator.yaml
```

@@ -28,7 +28,7 @@ To download the default configuration from upstream and update the file use the

```bash
# assuming katalog/tigera is the root of the repository
export CALICO_VERSION="3.27.0"
export CALICO_VERSION="3.27.3"
curl https://raw.githubusercontent.com/projectcalico/calico/v${CALICO_VERSION}/manifests/custom-resources.yaml --output on-prem/custom-resources.yaml
```

@@ -50,7 +50,7 @@ To get the dashboards you can use the following commands:

```bash
# ⚠️ Assuming $PWD == root of the project
export CALICO_VERSION="3.27.0"
export CALICO_VERSION="3.27.3"
# we split the upstream file and store only the json files
curl -L https://raw.githubusercontent.com/projectcalico/calico/v${CALICO_VERSION}/manifests/grafana-dashboards.yaml | yq '.data["felix-dashboard.json"]' | sed 's/calico-demo-prometheus/prometheus/g' | jq > ./on-prem/monitoring/dashboards/felix-dashboard.json
curl -L https://raw.githubusercontent.com/projectcalico/calico/v${CALICO_VERSION}/manifests/grafana-dashboards.yaml | yq '.data["typha-dashboard.json"]' | sed 's/calico-demo-prometheus/prometheus/g' | jq > ./on-prem/monitoring/dashboards/typa-dashboard.json
19 changes: 17 additions & 2 deletions katalog/tigera/operator/tigera-operator.yaml
@@ -983,6 +983,13 @@ spec:
Loose]'
pattern: ^(?i)(Disabled|Strict|Loose)?$
type: string
bpfExcludeCIDRsFromNAT:
description: BPFExcludeCIDRsFromNAT is a list of CIDRs that are to
be excluded from NAT resolution so that host can handle them. A
typical usecase is node local DNS cache.
items:
type: string
type: array
bpfExtToServiceConnmark:
description: 'BPFExtToServiceConnmark in BPF mode, control a 32bit
mark that is set on connections from an external client to a local
@@ -25102,6 +25109,14 @@ rules:
verbs:
- create
- delete
# In addition to the above, the operator should have the ability to delete their own resources during uninstallation.
- apiGroups:
- operator.tigera.io
resources:
- installations
- apiservers
verbs:
- delete
- apiGroups:
- networking.k8s.io
resources:
@@ -25273,7 +25288,7 @@ spec:
dnsPolicy: ClusterFirstWithHostNet
containers:
- name: tigera-operator
image: quay.io/tigera/operator:v1.32.3
image: quay.io/tigera/operator:v1.32.7
imagePullPolicy: IfNotPresent
command:
- operator
@@ -25291,7 +25306,7 @@ spec:
- name: OPERATOR_NAME
value: "tigera-operator"
- name: TIGERA_OPERATOR_INIT_IMAGE_VERSION
value: v1.32.3
value: v1.32.7
envFrom:
- configMapRef:
name: kubernetes-services-endpoint
Expand Down
