Releases: sighupio/fury-distribution
Release v1.31.0
Kubernetes Fury Distribution Release v1.31.0
Welcome to KFD release v1.31.0.
The distribution is maintained with ❤️ by the SIGHUP team.
New Features since v1.30.0
Installer Updates
- on-premises 📦 installer: v1.31.4
- Added support for Kubernetes 1.31.4
Module updates
No module updates from the last version.
Breaking changes 💔
No breaking changes in this version.
New features 🌟
- [#320] Custom labels and annotations for on-premises nodes: the configuration file for on-premises clusters now supports specifying custom labels and annotations for the control-plane nodes and for the node groups. The labels and annotations specified are applied to all the nodes in the group (and deleted when removed from the configuration). Usage example:

  ...
  spec:
    kubernetes:
      masters:
        hosts:
          - name: master1
            ip: 192.168.66.29
          - name: master2
            ip: 192.168.66.30
          - name: master3
            ip: 192.168.66.31
        labels:
          node-role.kubernetes.io/dungeon-master: ""
          dnd-enabled: "true"
        annotations:
          level: "100"
      nodes:
        - name: infra
          hosts:
            - name: infra1
              ip: 192.168.66.32
            - name: infra2
              ip: 192.168.66.33
            - name: infra3
              ip: 192.168.66.34
          taints:
            - effect: NoSchedule
              key: node.kubernetes.io/role
              value: infra
          labels:
            a-label: with-content
            empty-label: ""
            label/sighup: "with-slashes"
            node-role.kubernetes.io/wizard: ""
            dnd-enabled: "true"
          annotations:
            with-spaces: "annotation with spaces"
            without-spaces: annotation-without-spaces
            level: "20"
        - name: worker
          hosts:
            - name: worker1
              ip: 192.168.66.35
          taints: []
          labels:
            node-role.kubernetes.io/barbarian: ""
            dnd-enabled: "true"
            label-custom: "with-value"
          annotations:
            level: "10"
        - name: empty-labels-and-annotations
          hosts:
            - name: empty1
              ip: 192.168.66.50
          taints: []
          labels:
          annotations:
        - name: undefined-labels-and-annotations
          hosts:
            - name: undefined1
              ip: 192.168.66.51
          taints: []
  ...
- [#322] Apply step now uses kapp: the manifests apply for the distribution phase and for the kustomize plugins is now done via kapp instead of kubectl. kapp applies manifests and verifies that everything being installed is functioning correctly. It can also apply CRDs, wait for them to become available, and then apply the CRs that reference them. This significantly reduces the complexity of apply operations, which were previously performed with plain kubectl.
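The CRD-before-CR ordering mentioned above can be illustrated with a minimal Python sketch (this is not kapp's implementation, and the manifest kinds besides CustomResourceDefinition are hypothetical):

```python
# Minimal sketch, not kapp itself: order a manifest list so that
# CustomResourceDefinitions are applied before the resources that reference them.
manifests = [
    {"kind": "MyResource", "apiVersion": "example.com/v1"},  # hypothetical CR
    {"kind": "CustomResourceDefinition", "apiVersion": "apiextensions.k8s.io/v1"},
    {"kind": "Deployment", "apiVersion": "apps/v1"},
]

# CRDs sort first (False < True); sorted() is stable, so everything else
# keeps its relative order.
ordered = sorted(manifests, key=lambda m: m["kind"] != "CustomResourceDefinition")

print([m["kind"] for m in ordered])
# ['CustomResourceDefinition', 'MyResource', 'Deployment']
```

A real apply tool would additionally wait for each CRD to become Established before applying its CRs, which is the part that removes manual retry loops from plain kubectl workflows.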
Fixes 🐞
No fixes in this version.
Upgrade procedure
Check the upgrade docs for the detailed procedure.
Release v1.29.5
Kubernetes Fury Distribution Release v1.29.5
Welcome to KFD release v1.29.5. This patch release also updates Kubernetes from 1.29.3 to 1.29.10 on the OnPremises provider.
The distribution is maintained with ❤️ by the SIGHUP team.
New Features since v1.29.4
Installer Updates
- on-premises 📦 installer: v1.30.6
- Updated etcd default version to 3.5.15
- Updated HAProxy version to 3.0 LTS
- Updated containerd default version to 1.7.23
- Added support for Kubernetes versions 1.30.6, 1.29.10 and 1.28.15
- eks 📦 installer: v3.2.0
- Introduced AMI selection types: alinux2023 and alinux2
- Fixed eks-managed nodepool node labels
Module updates
- networking 📦 core module: v2.0.0
- Updated Tigera operator to v1.36.1 (includes Calico v3.29.0)
- Updated Cilium to v1.16.3
- monitoring 📦 core module: v3.3.0
- Updated blackbox-exporter to v0.25.0
- Updated grafana to v11.3.0
- Updated kube-rbac-proxy to v0.18.1
- Updated kube-state-metrics to v2.13.0
- Updated node-exporter to v1.8.2
- Updated prometheus-adapter to v0.12.0
- Updated prometheus-operator to v0.76.2
- Updated prometheus to v2.54.1
- Updated x509-exporter to v3.17.0
- Updated mimir to v2.14.0
- Updated minio to version RELEASE.2024-10-13T13-34-11Z
- logging 📦 core module: v4.0.0
- Updated opensearch and opensearch-dashboards to v2.17.1
- Updated logging-operator to v4.10.0
- Updated loki to v2.9.10
- Updated minio to version RELEASE.2024-10-13T13-34-11Z
- ingress 📦 core module: v3.0.1
- Updated cert-manager to v1.16.1
- Updated external-dns to v0.15.0
- Updated forecastle to v1.0.145
- Updated nginx to v1.11.3
- auth 📦 core module: v0.4.0
- Updated dex to v2.41.1
- Updated pomerium to v0.27.1
- dr 📦 core module: v3.0.0
- Updated velero to v1.15.0
- Updated all velero plugins to v1.11.0
- Added snapshot-controller v8.0.1
- tracing 📦 core module: v1.1.0
- Updated tempo to v2.6.0
- Updated minio to version RELEASE.2024-10-13T13-34-11Z
- opa 📦 core module: v1.13.0
- Updated gatekeeper to v3.17.1
- Updated gatekeeper-policy-manager to v1.0.13
- Updated kyverno to v1.12.6
- aws 📦 module: v4.3.0
- Updated cluster-autoscaler to v1.30.0
- Updated snapshot-controller to v8.1.0
- Updated aws-load-balancer-controller to v2.10.0
- Updated node-termination-handler to v1.22.0
Breaking changes 💔
- Loki store and schema change: a new store and schema have been introduced to improve the efficiency, speed, and scalability of Loki clusters. See the "New features" section below for more details.
- DR schema change: a new format for schedule customization has been introduced to improve usability. See the "New features" section below for more details.
- Kyverno validation failure action: Kyverno has deprecated audit and enforce as valid options for validationFailureAction; the valid options are now Audit and Enforce, in title case. Adjust your .spec.distribution.modules.policy.kyverno.validationFailureAction value accordingly.
New features 🌟
- New option for Logging: Loki's configuration has been extended with a new required tsdbStartDate option to allow a migration towards TSDB and schema v13 storage (note: this is a breaking change):

  ...
  spec:
    distribution:
      modules:
        logging:
          loki:
            tsdbStartDate: "2024-11-18"
  ...

  tsdbStartDate (required): a string in ISO 8601 date format representing the day starting from which Loki will record logs with the new store and schema.
  ℹ️ Note: Loki will assume the start of the day at UTC midnight of the specified day.
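The UTC-midnight cutover can be made concrete with a short Python sketch (the helper name is ours, not part of KFD):

```python
from datetime import datetime, timezone

# Hypothetical helper illustrating the note above: the cutover instant for a
# given tsdbStartDate is interpreted as UTC midnight of that day.
def tsdb_cutover(tsdb_start_date: str) -> datetime:
    day = datetime.strptime(tsdb_start_date, "%Y-%m-%d")
    return day.replace(tzinfo=timezone.utc)  # 00:00:00 UTC on that day

print(tsdb_cutover("2024-11-18").isoformat())
# 2024-11-18T00:00:00+00:00
```

In other words, logs written before that instant stay in the old store; logs written at or after it use TSDB with schema v13.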
- Improved configurable schedules for DR backups: the schedule configuration has been updated to make schedule customization easier (note: this is a breaking change):

  ...
  spec:
    distribution:
      modules:
        dr:
          velero:
            schedules:
              install: true
              definitions:
                manifests:
                  schedule: "*/15 * * * *"
                  ttl: "720h0m0s"
                full:
                  schedule: "0 1 * * *"
                  ttl: "720h0m0s"
                  snapshotMoveData: false
  ...
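The ttl fields use Go-style duration strings. A small Python sketch (our own helper, not part of KFD or Velero) shows how "720h0m0s" maps to 30 days:

```python
import re
from datetime import timedelta

# Hypothetical parser for Go-style duration strings such as "720h0m0s",
# the format Velero uses for backup TTLs.
def parse_ttl(ttl: str) -> timedelta:
    match = re.fullmatch(r"(?:(\d+)h)?(?:(\d+)m)?(?:(\d+)s)?", ttl)
    hours, minutes, seconds = (int(g or 0) for g in match.groups())
    return timedelta(hours=hours, minutes=minutes, seconds=seconds)

print(parse_ttl("720h0m0s").days)
# 30
```

So the default TTL in the example above keeps each backup for 30 days before Velero garbage-collects it.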
- DR snapshotMoveData option for the full schedule: a new parameter has been introduced in the Velero full schedule to enable the snapshotMoveData feature, which allows data captured from a snapshot to be copied to the object storage location. Important: setting this parameter to true will cause Velero to upload all data from the snapshotted volumes to S3 using Kopia. While backups are deduplicated, significant storage usage is still expected. To enable it, use the following parameter in the full schedule configuration:

  ...
  spec:
    distribution:
      modules:
        dr:
          velero:
            schedules:
              install: true
              definitions:
                full:
                  snapshotMoveData: true
  ...
General example to enable Volume Snapshotting on rook-ceph (from our storage add-on module):
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: velero-snapclass
  labels:
    velero.io/csi-volumesnapshot-class: "true"
driver: rook-ceph.rbd.csi.ceph.com
parameters:
  clusterID: rook-ceph
  csi.storage.k8s.io/snapshotter-secret-name: rook-csi-rbd-provisioner
  csi.storage.k8s.io/snapshotter-secret-namespace: rook-ceph
deletionPolicy: Retain

deletionPolicy: Retain is important because if the volume snapshot is deleted from the namespace, the cluster-wide VolumeSnapshotContent CR will be preserved, maintaining the snapshot on the storage that the cluster is using.
NOTE: for the EKSCluster provider, a default VolumeSnapshotClass is created automatically.
- DR optional snapshot-controller installation: to leverage VolumeSnapshots on the OnPremises and KFDDistribution providers, a new option has been added to Velero to install the snapshot-controller component. Before activating this parameter, make sure that no other snapshot-controller component is deployed in your cluster. By default this parameter is false.

  ...
  spec:
    distribution:
      modules:
        dr:
          velero:
            snapshotController:
              install: true
  ...
- Prometheus ScrapeConfigs: the Monitoring module now enables by default the ScrapeConfig CRDs from the Prometheus Operator. All the ScrapeConfig objects present in the cluster will now be detected by the operator. ScrapeConfig objects are used to instruct Prometheus to scrape specific endpoints that can be outside the cluster.
- Components Hardening: we hardened the security context of several components, improving the out-of-the-box security of the distribution.
- On-premises minimal clusters: it is now possible to create clusters with only control-plane nodes, for minimal installations that need to handle small workloads.
- Helm Plugins: Helm plugins now allow disabling validation at installation time with the disableValidationOnInstall option. This can be useful, for example, when installing Helm charts that fail the diff step on a first installation.
- Network Policies (experimental 🧪): you can now enable the installation of network policies that restrict traffic across all the infrastructural namespaces of KFD to just the access needed for proper functioning, denying everything else and improving the overall security of the cluster. This experimental feature is only available on OnPremises clusters at the moment. Read more in the Pull Request introducing the feature and in the related documentation.
- Global CVE-patched images for core modules: this distribution version includes images that have been patched for OS vulnerabilities (CVEs). To use these patched images, select the following option:

  ...
  spec:
    distribution:
      common:
        registry: registry.sighup.io/fury-secured
  ...
Fixes 🐞
- Improved Configuration Schema documentation: documentation for the configuration schemas was lacking, we great...
Release v1.28.5
Kubernetes Fury Distribution Release v1.28.5
Welcome to KFD release v1.28.5. This patch release also updates Kubernetes from 1.28.7 to 1.28.15 on the OnPremises provider.
The distribution is maintained with ❤️ by the SIGHUP team.
New Features since v1.28.4
Installer Updates
- on-premises 📦 installer: v1.30.6
- Updated etcd default version to 3.5.15
- Updated HAProxy version to 3.0 LTS
- Updated containerd default version to 1.7.23
- Added support for Kubernetes versions 1.30.6, 1.29.10 and 1.28.15
- eks 📦 installer: v3.2.0
- Introduced AMI selection types: alinux2023 and alinux2
- Fixed eks-managed nodepool node labels
Module updates
- networking 📦 core module: v2.0.0
- Updated Tigera operator to v1.36.1 (includes Calico v3.29.0)
- Updated Cilium to v1.16.3
- monitoring 📦 core module: v3.3.0
- Updated blackbox-exporter to v0.25.0
- Updated grafana to v11.3.0
- Updated kube-rbac-proxy to v0.18.1
- Updated kube-state-metrics to v2.13.0
- Updated node-exporter to v1.8.2
- Updated prometheus-adapter to v0.12.0
- Updated prometheus-operator to v0.76.2
- Updated prometheus to v2.54.1
- Updated x509-exporter to v3.17.0
- Updated mimir to v2.14.0
- Updated minio to version RELEASE.2024-10-13T13-34-11Z
- logging 📦 core module: v4.0.0
- Updated opensearch and opensearch-dashboards to v2.17.1
- Updated logging-operator to v4.10.0
- Updated loki to v2.9.10
- Updated minio to version RELEASE.2024-10-13T13-34-11Z
- ingress 📦 core module: v3.0.1
- Updated cert-manager to v1.16.1
- Updated external-dns to v0.15.0
- Updated forecastle to v1.0.145
- Updated nginx to v1.11.3
- auth 📦 core module: v0.4.0
- Updated dex to v2.41.1
- Updated pomerium to v0.27.1
- dr 📦 core module: v3.0.0
- Updated velero to v1.15.0
- Updated all velero plugins to v1.11.0
- Added snapshot-controller v8.0.1
- tracing 📦 core module: v1.1.0
- Updated tempo to v2.6.0
- Updated minio to version RELEASE.2024-10-13T13-34-11Z
- opa 📦 core module: v1.13.0
- Updated gatekeeper to v3.17.1
- Updated gatekeeper-policy-manager to v1.0.13
- Updated kyverno to v1.12.6
- aws 📦 module: v4.3.0
- Updated cluster-autoscaler to v1.30.0
- Updated snapshot-controller to v8.1.0
- Updated aws-load-balancer-controller to v2.10.0
- Updated node-termination-handler to v1.22.0
Breaking changes 💔
- Loki store and schema change: a new store and schema have been introduced to improve the efficiency, speed, and scalability of Loki clusters. See the "New features" section below for more details.
- DR schema change: a new format for schedule customization has been introduced to improve usability. See the "New features" section below for more details.
- Kyverno validation failure action: Kyverno has deprecated audit and enforce as valid options for validationFailureAction; the valid options are now Audit and Enforce, in title case. Adjust your .spec.distribution.modules.policy.kyverno.validationFailureAction value accordingly.
New features 🌟
- New option for Logging: Loki's configuration has been extended with a new required tsdbStartDate option to allow a migration towards TSDB and schema v13 storage (note: this is a breaking change):

  ...
  spec:
    distribution:
      modules:
        logging:
          loki:
            tsdbStartDate: "2024-11-18"
  ...

  tsdbStartDate (required): a string in ISO 8601 date format representing the day starting from which Loki will record logs with the new store and schema.
  ℹ️ Note: Loki will assume the start of the day at UTC midnight of the specified day.
- Improved configurable schedules for DR backups: the schedule configuration has been updated to make schedule customization easier (note: this is a breaking change):

  ...
  spec:
    distribution:
      modules:
        dr:
          velero:
            schedules:
              install: true
              definitions:
                manifests:
                  schedule: "*/15 * * * *"
                  ttl: "720h0m0s"
                full:
                  schedule: "0 1 * * *"
                  ttl: "720h0m0s"
                  snapshotMoveData: false
  ...
- DR snapshotMoveData option for the full schedule: a new parameter has been introduced in the Velero full schedule to enable the snapshotMoveData feature, which allows data captured from a snapshot to be copied to the object storage location. Important: setting this parameter to true will cause Velero to upload all data from the snapshotted volumes to S3 using Kopia. While backups are deduplicated, significant storage usage is still expected. To enable it, use the following parameter in the full schedule configuration:

  ...
  spec:
    distribution:
      modules:
        dr:
          velero:
            schedules:
              install: true
              definitions:
                full:
                  snapshotMoveData: true
  ...
General example to enable Volume Snapshotting on rook-ceph (from our storage add-on module):
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: velero-snapclass
  labels:
    velero.io/csi-volumesnapshot-class: "true"
driver: rook-ceph.rbd.csi.ceph.com
parameters:
  clusterID: rook-ceph
  csi.storage.k8s.io/snapshotter-secret-name: rook-csi-rbd-provisioner
  csi.storage.k8s.io/snapshotter-secret-namespace: rook-ceph
deletionPolicy: Retain

deletionPolicy: Retain is important because if the volume snapshot is deleted from the namespace, the cluster-wide VolumeSnapshotContent CR will be preserved, maintaining the snapshot on the storage that the cluster is using.
NOTE: for the EKSCluster provider, a default VolumeSnapshotClass is created automatically.
- DR optional snapshot-controller installation: to leverage VolumeSnapshots on the OnPremises and KFDDistribution providers, a new option has been added to Velero to install the snapshot-controller component. Before activating this parameter, make sure that no other snapshot-controller component is deployed in your cluster. By default this parameter is false.

  ...
  spec:
    distribution:
      modules:
        dr:
          velero:
            snapshotController:
              install: true
  ...
- Prometheus ScrapeConfigs: the Monitoring module now enables by default the ScrapeConfig CRDs from the Prometheus Operator. All the ScrapeConfig objects present in the cluster will now be detected by the operator. ScrapeConfig objects are used to instruct Prometheus to scrape specific endpoints that can be outside the cluster.
- Components Hardening: we hardened the security context of several components, improving the out-of-the-box security of the distribution.
- On-premises minimal clusters: it is now possible to create clusters with only control-plane nodes, for minimal installations that need to handle small workloads.
- Helm Plugins: Helm plugins now allow disabling validation at installation time with the disableValidationOnInstall option. This can be useful, for example, when installing Helm charts that fail the diff step on a first installation.
- Network Policies (experimental 🧪): you can now enable the installation of network policies that restrict traffic across all the infrastructural namespaces of KFD to just the access needed for proper functioning, denying everything else and improving the overall security of the cluster. This experimental feature is only available on OnPremises clusters at the moment. Read more in the Pull Request introducing the feature and in the related documentation.
- Global CVE-patched images for core modules: this distribution version includes images that have been patched for OS vulnerabilities (CVEs). To use these patched images, select the following option:

  ...
  spec:
    distribution:
      common:
        registry: registry.sighup.io/fury-secured
  ...
Fixes 🐞
- Improved Configuration Schema documentation: documentation for the configuration schemas was lacking, we great...
Release v1.30.0
Kubernetes Fury Distribution Release v1.30.0
Welcome to KFD release v1.30.0. This is the first release of KFD supporting Kubernetes 1.30.
The distribution is maintained with ❤️ by the SIGHUP team.
New Features since v1.29.4
Installer Updates
- on-premises 📦 installer: v1.30.6
- Updated etcd default version to 3.5.15
- Updated HAProxy version to 3.0 LTS
- Updated containerd default version to 1.7.23
- Added support for Kubernetes versions 1.30.6, 1.29.10 and 1.28.15
- eks 📦 installer: v3.2.0
- Introduced AMI selection types: alinux2023 and alinux2
- Fixed eks-managed nodepool node labels
Module updates
- networking 📦 core module: v2.0.0
- Updated Tigera operator to v1.36.1 (includes Calico v3.29.0)
- Updated Cilium to v1.16.3
- monitoring 📦 core module: v3.3.0
- Updated blackbox-exporter to v0.25.0
- Updated grafana to v11.3.0
- Updated kube-rbac-proxy to v0.18.1
- Updated kube-state-metrics to v2.13.0
- Updated node-exporter to v1.8.2
- Updated prometheus-adapter to v0.12.0
- Updated prometheus-operator to v0.76.2
- Updated prometheus to v2.54.1
- Updated x509-exporter to v3.17.0
- Updated mimir to v2.14.0
- Updated minio to version RELEASE.2024-10-13T13-34-11Z
- logging 📦 core module: v4.0.0
- Updated opensearch and opensearch-dashboards to v2.17.1
- Updated logging-operator to v4.10.0
- Updated loki to v2.9.10
- Updated minio to version RELEASE.2024-10-13T13-34-11Z
- ingress 📦 core module: v3.0.1
- Updated cert-manager to v1.16.1
- Updated external-dns to v0.15.0
- Updated forecastle to v1.0.145
- Updated nginx to v1.11.3
- auth 📦 core module: v0.4.0
- Updated dex to v2.41.1
- Updated pomerium to v0.27.1
- dr 📦 core module: v3.0.0
- Updated velero to v1.15.0
- Updated all velero plugins to v1.11.0
- Added snapshot-controller v8.0.1
- tracing 📦 core module: v1.1.0
- Updated tempo to v2.6.0
- Updated minio to version RELEASE.2024-10-13T13-34-11Z
- opa 📦 core module: v1.13.0
- Updated gatekeeper to v3.17.1
- Updated gatekeeper-policy-manager to v1.0.13
- Updated kyverno to v1.12.6
- aws 📦 module: v4.3.0
- Updated cluster-autoscaler to v1.30.0
- Updated snapshot-controller to v8.1.0
- Updated aws-load-balancer-controller to v2.10.0
- Updated node-termination-handler to v1.22.0
Breaking changes 💔
- Loki store and schema change: a new store and schema have been introduced to improve the efficiency, speed, and scalability of Loki clusters. See the "New features" section below for more details.
- DR schema change: a new format for schedule customization has been introduced to improve usability. See the "New features" section below for more details.
- Kyverno validation failure action: Kyverno has deprecated audit and enforce as valid options for validationFailureAction; the valid options are now Audit and Enforce, in title case. Adjust your .spec.distribution.modules.policy.kyverno.validationFailureAction value accordingly.
New features 🌟
- New option for Logging: Loki's configuration has been extended with a new required tsdbStartDate option to allow a migration towards TSDB and schema v13 storage (note: this is a breaking change):

  ...
  spec:
    distribution:
      modules:
        logging:
          loki:
            tsdbStartDate: "2024-11-18"
  ...

  tsdbStartDate (required): a string in ISO 8601 date format representing the day starting from which Loki will record logs with the new store and schema.
  ℹ️ Note: Loki will assume the start of the day at UTC midnight of the specified day.
- Improved configurable schedules for DR backups: the schedule configuration has been updated to make schedule customization easier (note: this is a breaking change):

  ...
  spec:
    distribution:
      modules:
        dr:
          velero:
            schedules:
              install: true
              definitions:
                manifests:
                  schedule: "*/15 * * * *"
                  ttl: "720h0m0s"
                full:
                  schedule: "0 1 * * *"
                  ttl: "720h0m0s"
                  snapshotMoveData: false
  ...
- DR snapshotMoveData option for the full schedule: a new parameter has been introduced in the Velero full schedule to enable the snapshotMoveData feature, which allows data captured from a snapshot to be copied to the object storage location. Important: setting this parameter to true will cause Velero to upload all data from the snapshotted volumes to S3 using Kopia. While backups are deduplicated, significant storage usage is still expected. To enable it, use the following parameter in the full schedule configuration:

  ...
  spec:
    distribution:
      modules:
        dr:
          velero:
            schedules:
              install: true
              definitions:
                full:
                  snapshotMoveData: true
  ...
General example to enable Volume Snapshotting on rook-ceph (from our storage add-on module):
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: velero-snapclass
  labels:
    velero.io/csi-volumesnapshot-class: "true"
driver: rook-ceph.rbd.csi.ceph.com
parameters:
  clusterID: rook-ceph
  csi.storage.k8s.io/snapshotter-secret-name: rook-csi-rbd-provisioner
  csi.storage.k8s.io/snapshotter-secret-namespace: rook-ceph
deletionPolicy: Retain

deletionPolicy: Retain is important because if the volume snapshot is deleted from the namespace, the cluster-wide VolumeSnapshotContent CR will be preserved, maintaining the snapshot on the storage that the cluster is using.
NOTE: for the EKSCluster provider, a default VolumeSnapshotClass is created automatically.
- DR optional snapshot-controller installation: to leverage VolumeSnapshots on the OnPremises and KFDDistribution providers, a new option has been added to Velero to install the snapshot-controller component. Before activating this parameter, make sure that no other snapshot-controller component is deployed in your cluster. By default this parameter is false.

  ...
  spec:
    distribution:
      modules:
        dr:
          velero:
            snapshotController:
              install: true
  ...
- Prometheus ScrapeConfigs: the Monitoring module now enables by default the ScrapeConfig CRDs from the Prometheus Operator. All the ScrapeConfig objects present in the cluster will now be detected by the operator. ScrapeConfig objects are used to instruct Prometheus to scrape specific endpoints that can be outside the cluster.
- Components Hardening: we hardened the security context of several components, improving the out-of-the-box security of the distribution.
- On-premises minimal clusters: it is now possible to create clusters with only control-plane nodes, for minimal installations that need to handle small workloads.
- Helm Plugins: Helm plugins now allow disabling validation at installation time with the disableValidationOnInstall option. This can be useful, for example, when installing Helm charts that fail the diff step on a first installation.
- Network Policies (experimental 🧪): you can now enable the installation of network policies that restrict traffic across all the infrastructural namespaces of KFD to just the access needed for proper functioning, denying everything else and improving the overall security of the cluster. This experimental feature is only available on OnPremises clusters at the moment. Read more in the Pull Request introducing the feature and in the related documentation.
- Global CVE-patched images for core modules: this distribution version includes images that have been patched for OS vulnerabilities (CVEs). To use these patched images, select the following option:

  ...
  spec:
    distribution:
      common:
        registry: registry.sighup.io/fur...
Release v1.29.4
Kubernetes Fury Distribution Release v1.29.4
Welcome to KFD release v1.29.4.
The distribution is maintained with ❤️ by the SIGHUP team and is battle-tested in production environments.
New Features since v1.29.3
Installer Updates
No changes
Module updates
No changes
New features 🌟
- Configurable distribution registry: the registry used by the distribution can now be configured. An example configuration:

  spec:
    distribution:
      common:
        registry: myregistry.mydomain.ext

- Configurable on-premises registry: the registry used by the OnPremises kind can now be configured. An example configuration:

  spec:
    kubernetes:
      advanced:
        registry: myregistry.mydomain.ext
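To illustrate what a registry override amounts to, here is a small Python sketch (the helper and the image name are ours, purely illustrative): image references keep their repository path and tag, only the registry host changes.

```python
# Hypothetical illustration of a registry override: the image keeps its
# repository path and tag; only the registry host is replaced.
def override_registry(image: str, registry: str) -> str:
    _, _, repo_path = image.partition("/")  # drop the original registry host
    return f"{registry}/{repo_path}"

print(override_registry("registry.sighup.io/fury/ingress-nginx:v1.11.3",
                        "myregistry.mydomain.ext"))
# myregistry.mydomain.ext/fury/ingress-nginx:v1.11.3
```

This is useful for air-gapped environments or local mirrors, where the default upstream registry is not reachable from the cluster.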
Fixes 🐞
No changes
Upgrade procedure
Check the upgrade docs for the detailed procedure.
Release v1.28.4
Kubernetes Fury Distribution Release v1.28.4
Welcome to KFD release v1.28.4.
The distribution is maintained with ❤️ by the SIGHUP team and is battle-tested in production environments.
New Features since v1.28.3
Installer Updates
No changes
Module updates
No changes
New features 🌟
- Configurable distribution registry: the registry used by the distribution can now be configured. An example configuration:

  spec:
    distribution:
      common:
        registry: myregistry.mydomain.ext

- Configurable on-premises registry: the registry used by the OnPremises kind can now be configured. An example configuration:

  spec:
    kubernetes:
      advanced:
        registry: myregistry.mydomain.ext
Fixes 🐞
No changes
Upgrade procedure
Check the upgrade docs for the detailed procedure.
Release v1.27.9
Kubernetes Fury Distribution Release v1.27.9
Welcome to KFD release v1.27.9.
The distribution is maintained with ❤️ by the SIGHUP team and is battle-tested in production environments.
New Features since v1.27.8
Installer Updates
No changes
Module updates
No changes
New features 🌟
- Configurable distribution registry: the registry used by the distribution can now be configured. An example configuration:

  spec:
    distribution:
      common:
        registry: myregistry.mydomain.ext

- Configurable on-premises registry: the registry used by the OnPremises kind can now be configured. An example configuration:

  spec:
    kubernetes:
      advanced:
        registry: myregistry.mydomain.ext
Fixes 🐞
No changes
Upgrade procedure
Check the upgrade docs for the detailed procedure.
Release v1.28.3
Kubernetes Fury Distribution Release v1.28.3
Welcome to KFD release v1.28.3.
The distribution is maintained with ❤️ by the SIGHUP team and is battle-tested in production environments.
New Features since v1.28.2
Installer Updates
No changes
Module updates
New features 🌟
- AUTH configurable expiration: Dex can now be configured with a custom expiration for ID tokens and signing keys. An example configuration:

  ...
  auth:
    dex:
      expiry:
        signingKeys: "6h"
        idTokens: "24h"
  ...
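One sanity check worth keeping in mind: signing keys typically rotate at least as often as the ID tokens they sign expire. A short Python sketch (our own hypothetical helper, not part of KFD or Dex) parses the two durations from the example above and compares them:

```python
# Hypothetical helper: parse simple hour-based durations like "6h" and "24h",
# as used in the Dex expiry example above.
def parse_hours(value: str) -> int:
    assert value.endswith("h"), "only hour-based durations in this sketch"
    return int(value[:-1])

expiry = {"signingKeys": "6h", "idTokens": "24h"}

# Signing keys rotating more frequently than the token lifetime is the
# usual configuration, as in the example values shipped here.
print(parse_hours(expiry["signingKeys"]) <= parse_hours(expiry["idTokens"]))
# True
```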
Fixes 🐞
- Ingress NGINX Controller: the updated version of the Ingress NGINX Controller fixes CVE-2024-7646.
Upgrade procedure
Check the upgrade docs for the detailed procedure.
Release v1.27.8
Kubernetes Fury Distribution Release v1.27.8
Welcome to KFD release v1.27.8.
The distribution is maintained with ❤️ by the SIGHUP team and is battle-tested in production environments.
New Features since v1.27.7
Installer Updates
No changes
Module updates
New features 🌟
- AUTH configurable expiration: Dex can now be configured with a custom expiration for ID tokens and signing keys. An example configuration:

  ...
  auth:
    dex:
      expiry:
        signingKeys: "6h"
        idTokens: "24h"
  ...
Fixes 🐞
- Ingress NGINX Controller: the updated version of the Ingress NGINX Controller fixes CVE-2024-7646.
Upgrade procedure
Check the upgrade docs for the detailed procedure.
Release v1.29.3
Kubernetes Fury Distribution Release v1.29.3
Welcome to KFD release v1.29.3.
The distribution is maintained with ❤️ by the SIGHUP team and is battle-tested in production environments.
New Features since v1.29.2
Installer Updates
No changes
Module updates
New features 🌟
- AUTH configurable expiration: Dex can now be configured with a custom expiration for ID tokens and signing keys. An example configuration:

  ...
  auth:
    dex:
      expiry:
        signingKeys: "6h"
        idTokens: "24h"
  ...
Fixes 🐞
- Ingress NGINX Controller: the updated version of the Ingress NGINX Controller fixes CVE-2024-7646.
Upgrade procedure
Check the upgrade docs for the detailed procedure.