For more information on the tooling versions expected in the project, see `versions.yaml`.
This tutorial shows how to configure a fully working e2e test setup including the following components:
- Lifecycle Manager
- Runtime Watcher on a remote cluster
- template-operator on a remote cluster as an example
This setup is deployed with the following security features enabled:
- Strict mTLS connection between Kyma Control Plane (KCP) and SKR clusters
- SAN Pinning (the SAN of the client TLS certificate must match the DNS annotation of the corresponding Kyma CR)
The following tooling is required in the versions defined in `versions.yaml`:
- cmctl (cert-manager)
- docker
- go
- golangci-lint
- istioctl
- k3d
- kubectl
- kustomize
- modulectl
- yq
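
The scripts and commands in this tutorial read the pinned versions from `versions.yaml` with yq. As a quick sanity check, you can print the two keys used in the next step:

```bash
# print the pinned Kubernetes and cert-manager versions
yq e '.k8s' ./versions.yaml
yq e '.certManager' ./versions.yaml
```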
Execute the following scripts from the project root.
Create local test clusters for SKR and KCP.
```bash
K8S_VERSION=$(yq e '.k8s' ./versions.yaml)
CERT_MANAGER_VERSION=$(yq e '.certManager' ./versions.yaml)
./scripts/tests/create_test_clusters.sh --k8s-version $K8S_VERSION --cert-manager-version $CERT_MANAGER_VERSION
```
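
Both clusters should now be reachable through their k3d contexts (the context names below are the ones used in the verification steps later in this tutorial):

```bash
# expect k3d-kcp and k3d-skr in the output
kubectl config get-contexts -o name | grep k3d
```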
Install the CRDs to the KCP cluster.
```bash
./scripts/tests/install_crds.sh
```
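
To confirm the installation, list the CRDs. Filtering on the `kyma-project.io` API group is an assumption, but the Kyma and ModuleTemplate CRDs used later in this tutorial should appear:

```bash
kubectl config use-context k3d-kcp
kubectl get crds | grep kyma-project.io
```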
Deploy a built image from the registry, e.g., the latest image from the `prod` registry.
```bash
REGISTRY=prod
TAG=latest
./scripts/tests/deploy_klm_from_registry.sh --image-registry $REGISTRY --image-tag $TAG
```
Alternatively, build a new image from the local sources, push it to the local KCP registry, and deploy it.
```bash
./scripts/tests/deploy_klm_from_sources.sh
```
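
Whichever path you choose, the result is a KLM deployment in the `kcp-system` namespace. A minimal readiness check, reusing the deployment name queried for logs at the end of this tutorial:

```bash
kubectl config use-context k3d-kcp
kubectl rollout status deploy/klm-controller-manager -n kcp-system
```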
Deploy a Kyma CR.

```bash
SKR_HOST=host.k3d.internal
./scripts/tests/deploy_kyma.sh $SKR_HOST
```
Verify Kyma is Ready in KCP (takes roughly 1-2 minutes).
```bash
kubectl config use-context k3d-kcp
kubectl get kyma/kyma-sample -n kcp-system
```
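
Once reconciliation finishes, the Kyma CR reports the `Ready` state. The exact printer columns may differ; illustrative output:

```
NAME          STATE   AGE
kyma-sample   Ready   2m
```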
Verify Kyma is Ready in SKR (takes roughly 1-2 minutes).
```bash
kubectl config use-context k3d-skr
kubectl get kyma/default -n kyma-system
```
Build the template-operator module locally and deploy it.
```bash
cd <template-operator-repository>
make build-manifests
modulectl create --config-file ./module-config.yaml --registry http://localhost:5111 --insecure

kubectl config use-context k3d-kcp
# the repository URL is localhost:5111 on the host machine but must be k3d-kcp-registry.localhost:5000 within the cluster
yq e '.spec.descriptor.component.repositoryContexts[0].baseUrl = "k3d-kcp-registry.localhost:5000"' ./template.yaml | kubectl apply -f -

MT_VERSION=$(yq e '.spec.version' ./template.yaml)
cd <lifecycle-manager-repository>
./scripts/tests/deploy_modulereleasemeta.sh template-operator regular:$MT_VERSION
```
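
Optionally, confirm that the module metadata reached KCP. The plural resource names below are assumptions derived from the ModuleTemplate and ModuleReleaseMeta kinds:

```bash
# expect one ModuleTemplate and one ModuleReleaseMeta for template-operator
kubectl get moduletemplates,modulereleasemetas -n kcp-system
```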
Add the module.
```bash
kubectl config use-context k3d-skr
kubectl get kyma/default -n kyma-system -o yaml | yq e '.spec.modules[0]={"name": "template-operator"}' | kubectl apply -f -
```
Verify that the module becomes `Ready` (takes roughly 1-2 minutes).
```bash
kubectl config use-context k3d-skr
kubectl get kyma/default -n kyma-system -o wide
```
To remove the module again, run:
```bash
kubectl config use-context k3d-skr
kubectl get kyma/default -n kyma-system -o yaml | yq e 'del(.spec.modules[0])' | kubectl apply -f -
```
Check the conditions of the Kyma CR:

- `SKRWebhook` to determine if the webhook has been installed to the SKR
- `ModuleCatalog` to determine if the ModuleTemplates and ModuleReleaseMetas have been synced to the SKR cluster
- `Modules` to determine if the added modules are ready
```bash
kubectl config use-context k3d-kcp
kubectl get kyma/kyma-sample -n kcp-system -o yaml | yq e '.status.conditions'
```
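
When the setup is healthy, each of the three condition types listed above reports status `True`. An illustrative shape, with timestamps, reasons, and messages omitted:

```yaml
- type: SKRWebhook
  status: "True"
- type: ModuleCatalog
  status: "True"
- type: Modules
  status: "True"
```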
Switch the channel back and forth to trigger an event.
```bash
kubectl config use-context k3d-skr
kubectl get kyma/default -n kyma-system -o yaml | yq e '.spec.channel="regular"' | kubectl apply -f -
kubectl get kyma/default -n kyma-system -o yaml | yq e '.spec.channel="fast"' | kubectl apply -f -
```
Verify that Lifecycle Manager received the event on KCP.
```bash
kubectl config use-context k3d-kcp
kubectl logs deploy/klm-controller-manager -n kcp-system | grep "event received from SKR"
```
Remove the local SKR and KCP test clusters.
```bash
k3d cluster rm kcp skr
```