
Cherry-pick PR #382 to release-v1.2 #421

Closed
wants to merge 6 commits into release-v1.2

Conversation

thunderboltsid
Contributor

Cherry-Pick Details

This required two changes:
- Remove hostAliases from the kube-vip podspec.
    This is addressed by adding the entries directly to /etc/hosts.
- Do a super-admin.conf switcheroo for the kube-vip static pod (see the sketch below).
    Add pre- and post-kubeadm commands to handle Kubernetes
    versions v1.29.0+. The pre-kubeadm command checks whether kubeadm
    init has been run and, if it has, replaces the kubeconfig
    hostPath in the kube-vip static pod from admin.conf to
    super-admin.conf. The post-kubeadm command checks whether
    kubeadm init has been run and, if it has, changes the
    hostPath in the kube-vip static pod from super-admin.conf
    back to admin.conf.
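
For context, a minimal sketch of where such hooks live in a KubeadmControlPlane template. The init-node detection check and the sed expressions here are illustrative assumptions, not the exact commands from this PR:

```yaml
kubeadmConfigSpec:
  preKubeadmCommands:
    # Assumed check: /run/kubeadm/kubeadm.yaml is only written on the node
    # where kubeadm init runs. On Kubernetes >= v1.29.0, point the kube-vip
    # static pod at super-admin.conf before kubeadm starts.
    - |
      if [ -f /run/kubeadm/kubeadm.yaml ]; then
        sed -i 's#path: /etc/kubernetes/admin.conf#path: /etc/kubernetes/super-admin.conf#' \
          /etc/kubernetes/manifests/kube-vip.yaml
      fi
  postKubeadmCommands:
    # Once kubeadm init has completed, switch the hostPath back to admin.conf.
    - |
      if [ -f /run/kubeadm/kubeadm.yaml ]; then
        sed -i 's#path: /etc/kubernetes/super-admin.conf#path: /etc/kubernetes/admin.conf#' \
          /etc/kubernetes/manifests/kube-vip.yaml
      fi
```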
@thunderboltsid
Contributor Author

/retest

@thunderboltsid
Contributor Author

thunderboltsid commented Apr 29, 2024

@thunderboltsid: The following tests failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests:

| Test name | Commit | Details | Required | Rerun command |
| --- | --- | --- | --- | --- |
| ci/prow/e2e-capx-controller-upgrade | 8b211ed | link | false | /test e2e-capx-controller-upgrade |
| ci/prow/e2e-nutanix-features | 8b211ed | link | false | /test e2e-nutanix-features |
| ci/prow/e2e-k8s-upgrade | 8b211ed | link | false | /test e2e-k8s-upgrade |
Full PR test history. Your PR dashboard.

We expect e2e-capx-controller-upgrade to fail, as that make target does not exist on this release branch.
The two tests failing in e2e-nutanix-features carried the nutanix-client and ccm labels. The tests with the nutanix-client label pass locally for me:

$ LABEL_FILTERS=nutanix-client make test-e2e-calico
CNI="/Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/test/e2e/data/cni/calico/calico.yaml" /Library/Developer/CommandLineTools/usr/bin/make test-e2e
KO_DOCKER_REPO=ko.local /Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/hack/tools/bin/ko-v0.11.2 build -B --platform=linux/amd64 -t e2e -L .
2024/04/29 11:18:09 Using base gcr.io/distroless/static:nonroot@sha256:e9ac71e2b8e279a8372741b7a0293afda17650d926900233ec3a7b2b7c22a246 for github.com/nutanix-cloud-native/cluster-api-provider-nutanix
2024/04/29 11:18:09 Building github.com/nutanix-cloud-native/cluster-api-provider-nutanix for linux/amd64
2024/04/29 11:18:12 Loading ko.local/cluster-api-provider-nutanix:053bea19de6c0db64ee45e55c3835df111dd8a340e6c2c69394122cdd35123ab
2024/04/29 11:18:14 Loaded ko.local/cluster-api-provider-nutanix:053bea19de6c0db64ee45e55c3835df111dd8a340e6c2c69394122cdd35123ab
2024/04/29 11:18:14 Adding tag e2e
2024/04/29 11:18:14 Added tag e2e
ko.local/cluster-api-provider-nutanix:053bea19de6c0db64ee45e55c3835df111dd8a340e6c2c69394122cdd35123ab
docker tag ko.local/cluster-api-provider-nutanix:e2e ghcr.io/nutanix-cloud-native/cluster-api-provider-nutanix/controller:e2e
/Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/hack/tools/bin/kustomize-v4.5.4 build /Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/test/e2e/data/infrastructure-nutanix/v1beta1/cluster-template --load-restrictor LoadRestrictionsNone > /Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/test/e2e/data/infrastructure-nutanix/v1beta1/cluster-template.yaml
/Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/hack/tools/bin/kustomize-v4.5.4 build /Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/test/e2e/data/infrastructure-nutanix/v1beta1/cluster-template-no-secret --load-restrictor LoadRestrictionsNone > /Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/test/e2e/data/infrastructure-nutanix/v1beta1/cluster-template-no-secret.yaml
/Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/hack/tools/bin/kustomize-v4.5.4 build /Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/test/e2e/data/infrastructure-nutanix/v1beta1/cluster-template-no-nutanix-cluster --load-restrictor LoadRestrictionsNone > /Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/test/e2e/data/infrastructure-nutanix/v1beta1/cluster-template-no-nutanix-cluster.yaml
/Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/hack/tools/bin/kustomize-v4.5.4 build /Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/test/e2e/data/infrastructure-nutanix/v1beta1/cluster-template-additional-categories --load-restrictor LoadRestrictionsNone > /Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/test/e2e/data/infrastructure-nutanix/v1beta1/cluster-template-additional-categories.yaml
/Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/hack/tools/bin/kustomize-v4.5.4 build /Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/test/e2e/data/infrastructure-nutanix/v1beta1/cluster-template-no-nmt --load-restrictor LoadRestrictionsNone > /Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/test/e2e/data/infrastructure-nutanix/v1beta1/cluster-template-no-nmt.yaml
/Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/hack/tools/bin/kustomize-v4.5.4 build /Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/test/e2e/data/infrastructure-nutanix/v1beta1/cluster-template-project --load-restrictor LoadRestrictionsNone > /Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/test/e2e/data/infrastructure-nutanix/v1beta1/cluster-template-project.yaml
/Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/hack/tools/bin/kustomize-v4.5.4 build /Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/test/e2e/data/infrastructure-nutanix/v1beta1/cluster-template-ccm --load-restrictor LoadRestrictionsNone > /Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/test/e2e/data/infrastructure-nutanix/v1beta1/cluster-template-ccm.yaml
/Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/hack/tools/bin/kustomize-v4.5.4 build /Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/test/e2e/data/infrastructure-nutanix/v1beta1/cluster-template-upgrades --load-restrictor LoadRestrictionsNone > /Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/test/e2e/data/infrastructure-nutanix/v1beta1/cluster-template-upgrades.yaml
/Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/hack/tools/bin/kustomize-v4.5.4 build /Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/test/e2e/data/infrastructure-nutanix/v1beta1/cluster-template-md-remediation --load-restrictor LoadRestrictionsNone > /Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/test/e2e/data/infrastructure-nutanix/v1beta1/cluster-template-md-remediation.yaml
/Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/hack/tools/bin/kustomize-v4.5.4 build /Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/test/e2e/data/infrastructure-nutanix/v1beta1/cluster-template-kcp-remediation --load-restrictor LoadRestrictionsNone > /Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/test/e2e/data/infrastructure-nutanix/v1beta1/cluster-template-kcp-remediation.yaml
/Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/hack/tools/bin/kustomize-v4.5.4 build /Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/test/e2e/data/infrastructure-nutanix/v1beta1/cluster-template-kcp-scale-in --load-restrictor LoadRestrictionsNone > /Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/test/e2e/data/infrastructure-nutanix/v1beta1/cluster-template-kcp-scale-in.yaml
/Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/hack/tools/bin/kustomize-v4.5.4 build /Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/test/e2e/data/infrastructure-nutanix/v1beta1/cluster-template-csi --load-restrictor LoadRestrictionsNone > /Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/test/e2e/data/infrastructure-nutanix/v1beta1/cluster-template-csi.yaml
/Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/hack/tools/bin/kustomize-v4.5.4 build /Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/test/e2e/data/infrastructure-nutanix/v1alpha4/cluster-template --load-restrictor LoadRestrictionsNone > /Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/test/e2e/data/infrastructure-nutanix/v1alpha4/cluster-template.yaml
/Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/hack/tools/bin/kustomize-v4.5.4 build templates/base > templates/cluster-template.yaml
/Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/hack/tools/bin/kustomize-v4.5.4 build templates/csi > templates/cluster-template-csi.yaml
/Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/hack/tools/bin/kustomize-v4.5.4 build templates/ccm > templates/cluster-template-ccm.yaml
mkdir -p /Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/_artifacts
NUTANIX_LOG_LEVEL=debug /Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/hack/tools/bin/ginkgo-v2.1.4 -v \
		--trace \
		--progress \
		--tags=e2e \
		--label-filter="!only-for-validation && nutanix-client" \
		--skip=""clusterctl-Upgrade"" \
		--nodes=1 \
		--no-color=false \
		--output-dir="/Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/_artifacts" \
		--junit-report="junit.e2e_suite.1.xml" \
		--timeout="24h" \
		--always-emit-ginkgo-writer \
		 ./test/e2e -- \
		-e2e.artifacts-folder="/Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/_artifacts" \
		-e2e.config="/Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/test/e2e/config/nutanix.yaml" \
		-e2e.skip-resource-cleanup=false \
		-e2e.use-existing-cluster=false
Running Suite: capx-e2e - /Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/test/e2e
========================================================================================================================
Random Seed: 1714382301

Will run 4 of 32 specs
------------------------------
[SynchronizedBeforeSuite]
/Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/test/e2e/e2e_suite_test.go:119
  STEP: Initializing a runtime.Scheme with all the GVK relevant for this test @ 04/29/24 11:18:26.312
  STEP: Loading the e2e test configuration from "/Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/test/e2e/config/nutanix.yaml" @ 04/29/24 11:18:26.312
  STEP: Creating a clusterctl local repository into "/Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/_artifacts" @ 04/29/24 11:18:26.313
  STEP: Reading the ClusterResourceSet manifest /Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/test/e2e/data/cni/calico/calico.yaml @ 04/29/24 11:18:26.313
  STEP: Setting up the bootstrap cluster @ 04/29/24 11:18:38.603
  STEP: Creating the bootstrap cluster @ 04/29/24 11:18:38.603
  INFO: Creating a kind cluster with name "test-c64f1l"
Creating cluster "test-c64f1l" ...
 • Ensuring node image (kindest/node:v1.23.6) 🖼  ...
 ✓ Ensuring node image (kindest/node:v1.23.6) 🖼
 • Preparing nodes 📦   ...
 ✓ Preparing nodes 📦
 • Writing configuration 📜  ...
 ✓ Writing configuration 📜
 • Starting control-plane 🕹️  ...
 ✓ Starting control-plane 🕹️
 • Installing CNI 🔌  ...
 ✓ Installing CNI 🔌
 • Installing StorageClass 💾  ...
 ✓ Installing StorageClass 💾
  INFO: The kubeconfig file for the kind cluster is /var/folders/g3/lb827bt96z10xz_m_c2xn93w0000gp/T/e2e-kind3010620704
  INFO: Loading image: "ghcr.io/nutanix-cloud-native/cluster-api-provider-nutanix/controller:e2e"
  INFO: Image ghcr.io/nutanix-cloud-native/cluster-api-provider-nutanix/controller:e2e is present in local container image cache
  INFO: Loading image: "registry.k8s.io/cluster-api/cluster-api-controller:v1.4.1"
  INFO: Image registry.k8s.io/cluster-api/cluster-api-controller:v1.4.1 is present in local container image cache
  INFO: Loading image: "registry.k8s.io/cluster-api/kubeadm-bootstrap-controller:v1.4.1"
  INFO: Image registry.k8s.io/cluster-api/kubeadm-bootstrap-controller:v1.4.1 is present in local container image cache
  INFO: Loading image: "registry.k8s.io/cluster-api/kubeadm-control-plane-controller:v1.4.1"
  INFO: Image registry.k8s.io/cluster-api/kubeadm-control-plane-controller:v1.4.1 is present in local container image cache
  STEP: Initializing the bootstrap cluster @ 04/29/24 11:18:54.269
  INFO: clusterctl init --config /Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/_artifacts/repository/clusterctl-config.yaml --kubeconfig /var/folders/g3/lb827bt96z10xz_m_c2xn93w0000gp/T/e2e-kind3010620704 --core cluster-api --bootstrap kubeadm --control-plane kubeadm --infrastructure nutanix
  INFO: Waiting for provider controllers to be running
  STEP: Waiting for deployment capi-kubeadm-bootstrap-system/capi-kubeadm-bootstrap-controller-manager to be available @ 04/29/24 11:19:56.517
  INFO: Creating log watcher for controller capi-kubeadm-bootstrap-system/capi-kubeadm-bootstrap-controller-manager, pod capi-kubeadm-bootstrap-controller-manager-6d4987b5bf-2t6l8, container manager
  STEP: Waiting for deployment capi-kubeadm-control-plane-system/capi-kubeadm-control-plane-controller-manager to be available @ 04/29/24 11:19:56.525
  INFO: Creating log watcher for controller capi-kubeadm-control-plane-system/capi-kubeadm-control-plane-controller-manager, pod capi-kubeadm-control-plane-controller-manager-67c7668b94-6htng, container manager
  STEP: Waiting for deployment capi-system/capi-controller-manager to be available @ 04/29/24 11:19:56.532
  INFO: Creating log watcher for controller capi-system/capi-controller-manager, pod capi-controller-manager-86464d4bfd-95hjp, container manager
  STEP: Waiting for deployment capx-system/capx-controller-manager to be available @ 04/29/24 11:19:56.541
  INFO: Creating log watcher for controller capx-system/capx-controller-manager, pod capx-controller-manager-84d9fcd6f8-29nth, container kube-rbac-proxy
  INFO: Creating log watcher for controller capx-system/capx-controller-manager, pod capx-controller-manager-84d9fcd6f8-29nth, container manager
[SynchronizedBeforeSuite] PASSED [90.628 seconds]
------------------------------
S
------------------------------
Nutanix client Create a cluster without credentialRef (should fail) [capx-feature-test, nutanix-client, slow, network]
/Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/test/e2e/nutanix_client_test.go:77
  STEP: Creating a namespace for hosting the "cluster-ntnx-client" test spec @ 04/29/24 11:19:58.394
  INFO: Creating namespace cluster-ntnx-client-ysdt27
  INFO: Creating event watcher for namespace "cluster-ntnx-client-ysdt27"
  STEP: Creating NutanixCluster resource without credentialRef @ 04/29/24 11:19:58.424
  STEP: Creating a workload cluster @ 04/29/24 11:19:58.442
  INFO: clusterctl config cluster cluster-ntnx-client-96yn16 --infrastructure (default) --kubernetes-version v1.29.2 --control-plane-machine-count 1 --worker-machine-count 1 --flavor no-nutanix-cluster
configmap/user-ca-bundle created
configmap/cni-cluster-ntnx-client-96yn16-crs-cni created
clusterresourceset.addons.cluster.x-k8s.io/cluster-ntnx-client-96yn16-crs-cni created
kubeadmconfigtemplate.bootstrap.cluster.x-k8s.io/cluster-ntnx-client-96yn16-kcfg-0 created
cluster.cluster.x-k8s.io/cluster-ntnx-client-96yn16 created
machinedeployment.cluster.x-k8s.io/cluster-ntnx-client-96yn16-wmd created
machinehealthcheck.cluster.x-k8s.io/cluster-ntnx-client-96yn16-mhc created
kubeadmcontrolplane.controlplane.cluster.x-k8s.io/cluster-ntnx-client-96yn16-kcp created
nutanixmachinetemplate.infrastructure.cluster.x-k8s.io/cluster-ntnx-client-96yn16-mt-0 created

  STEP: Checking CredentialRefSecretOwnerSet condition is false @ 04/29/24 11:19:59.287
  STEP: PASSED! @ 04/29/24 11:20:09.325
  STEP: Dumping logs from the "cluster-ntnx-client-96yn16" workload cluster @ 04/29/24 11:20:09.325
Failed to get logs for Machine cluster-ntnx-client-96yn16-wmd-59d8fc9d57xfc5zc-fqzj9, Cluster cluster-ntnx-client-ysdt27/cluster-ntnx-client-96yn16: error creating container exec: Error response from daemon: No such container: cluster-ntnx-client-96yn16-wmd-59d8fc9d57xfc5zc-fqzj9
  STEP: Dumping all the Cluster API resources in the "cluster-ntnx-client-ysdt27" namespace @ 04/29/24 11:20:09.377
  STEP: Deleting cluster cluster-ntnx-client-ysdt27/cluster-ntnx-client-96yn16 @ 04/29/24 11:20:09.483
  STEP: Deleting cluster cluster-ntnx-client-96yn16 @ 04/29/24 11:20:09.488
  INFO: Waiting for the Cluster cluster-ntnx-client-ysdt27/cluster-ntnx-client-96yn16 to be deleted
  STEP: Waiting for cluster cluster-ntnx-client-96yn16 to be deleted @ 04/29/24 11:20:09.493
  STEP: Deleting namespace used for hosting the "cluster-ntnx-client" test spec @ 04/29/24 11:20:19.501
  INFO: Deleting namespace cluster-ntnx-client-ysdt27
• [22.586 seconds]
------------------------------
Nutanix client Create a cluster without prismCentral attribute (use default credentials) [capx-feature-test, nutanix-client, slow, network]
/Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/test/e2e/nutanix_client_test.go:135
  STEP: Creating a namespace for hosting the "cluster-ntnx-client" test spec @ 04/29/24 11:20:20.309
  INFO: Creating namespace cluster-ntnx-client-t023yi
  INFO: Creating event watcher for namespace "cluster-ntnx-client-t023yi"
  STEP: Creating NutanixCluster resource without credentialRef @ 04/29/24 11:20:20.337
  STEP: Creating a workload cluster @ 04/29/24 11:20:20.357
  INFO: clusterctl config cluster cluster-ntnx-client-klufnx --infrastructure (default) --kubernetes-version v1.29.2 --control-plane-machine-count 1 --worker-machine-count 1 --flavor no-nutanix-cluster
configmap/user-ca-bundle created
configmap/cni-cluster-ntnx-client-klufnx-crs-cni created
clusterresourceset.addons.cluster.x-k8s.io/cluster-ntnx-client-klufnx-crs-cni created
kubeadmconfigtemplate.bootstrap.cluster.x-k8s.io/cluster-ntnx-client-klufnx-kcfg-0 created
cluster.cluster.x-k8s.io/cluster-ntnx-client-klufnx created
machinedeployment.cluster.x-k8s.io/cluster-ntnx-client-klufnx-wmd created
machinehealthcheck.cluster.x-k8s.io/cluster-ntnx-client-klufnx-mhc created
kubeadmcontrolplane.controlplane.cluster.x-k8s.io/cluster-ntnx-client-klufnx-kcp created
nutanixmachinetemplate.infrastructure.cluster.x-k8s.io/cluster-ntnx-client-klufnx-mt-0 created

  STEP: Checking cluster prism client init condition is true @ 04/29/24 11:20:20.701
  STEP: PASSED! @ 04/29/24 11:20:40.765
  STEP: Dumping logs from the "cluster-ntnx-client-klufnx" workload cluster @ 04/29/24 11:20:40.765
Failed to get logs for Machine cluster-ntnx-client-klufnx-kcp-nvrpb, Cluster cluster-ntnx-client-t023yi/cluster-ntnx-client-klufnx: error creating container exec: Error response from daemon: No such container: cluster-ntnx-client-klufnx-kcp-nvrpb
Failed to get logs for Machine cluster-ntnx-client-klufnx-wmd-755465768xcpgkj-wf28m, Cluster cluster-ntnx-client-t023yi/cluster-ntnx-client-klufnx: error creating container exec: Error response from daemon: No such container: cluster-ntnx-client-klufnx-wmd-755465768xcpgkj-wf28m
  STEP: Dumping all the Cluster API resources in the "cluster-ntnx-client-t023yi" namespace @ 04/29/24 11:20:40.819
  STEP: Deleting cluster cluster-ntnx-client-t023yi/cluster-ntnx-client-klufnx @ 04/29/24 11:20:40.919
  STEP: Deleting cluster cluster-ntnx-client-klufnx @ 04/29/24 11:20:40.925
  INFO: Waiting for the Cluster cluster-ntnx-client-t023yi/cluster-ntnx-client-klufnx to be deleted
  STEP: Waiting for cluster cluster-ntnx-client-klufnx to be deleted @ 04/29/24 11:20:40.929
  STEP: Deleting namespace used for hosting the "cluster-ntnx-client" test spec @ 04/29/24 11:22:21.016
  INFO: Deleting namespace cluster-ntnx-client-t023yi
• [121.515 seconds]
------------------------------
Nutanix client Create a cluster without secret and add it later [capx-feature-test, nutanix-client, slow, network]
/Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/test/e2e/nutanix_client_test.go:179
  STEP: Creating a namespace for hosting the "cluster-ntnx-client" test spec @ 04/29/24 11:22:22.367
  INFO: Creating namespace cluster-ntnx-client-idmolu
  INFO: Creating event watcher for namespace "cluster-ntnx-client-idmolu"
  STEP: Creating a workload cluster @ 04/29/24 11:22:22.393
  INFO: clusterctl config cluster cluster-ntnx-client-5cx6ww --infrastructure (default) --kubernetes-version v1.29.2 --control-plane-machine-count 1 --worker-machine-count 1 --flavor no-secret
configmap/user-ca-bundle created
configmap/cni-cluster-ntnx-client-5cx6ww-crs-cni created
clusterresourceset.addons.cluster.x-k8s.io/cluster-ntnx-client-5cx6ww-crs-cni created
kubeadmconfigtemplate.bootstrap.cluster.x-k8s.io/cluster-ntnx-client-5cx6ww-kcfg-0 created
cluster.cluster.x-k8s.io/cluster-ntnx-client-5cx6ww created
machinedeployment.cluster.x-k8s.io/cluster-ntnx-client-5cx6ww-wmd created
machinehealthcheck.cluster.x-k8s.io/cluster-ntnx-client-5cx6ww-mhc created
kubeadmcontrolplane.controlplane.cluster.x-k8s.io/cluster-ntnx-client-5cx6ww-kcp created
nutanixcluster.infrastructure.cluster.x-k8s.io/cluster-ntnx-client-5cx6ww created
nutanixmachinetemplate.infrastructure.cluster.x-k8s.io/cluster-ntnx-client-5cx6ww-mt-0 created

  STEP: Checking cluster condition for credentials is set to false @ 04/29/24 11:22:22.799
  STEP: Creating secret using e2e credentials @ 04/29/24 11:22:52.879
  STEP: Checking cluster credential condition is true @ 04/29/24 11:22:52.887
  STEP: Checking cluster prism client init condition is true @ 04/29/24 11:23:02.922
  STEP: PASSED! @ 04/29/24 11:23:02.937
  STEP: Dumping logs from the "cluster-ntnx-client-5cx6ww" workload cluster @ 04/29/24 11:23:02.937
Failed to get logs for Machine cluster-ntnx-client-5cx6ww-kcp-7ghxs, Cluster cluster-ntnx-client-idmolu/cluster-ntnx-client-5cx6ww: error creating container exec: Error response from daemon: No such container: cluster-ntnx-client-5cx6ww-kcp-7ghxs
Failed to get logs for Machine cluster-ntnx-client-5cx6ww-wmd-5c4f658dc8xcqgd4-hj4tw, Cluster cluster-ntnx-client-idmolu/cluster-ntnx-client-5cx6ww: error creating container exec: Error response from daemon: No such container: cluster-ntnx-client-5cx6ww-wmd-5c4f658dc8xcqgd4-hj4tw
  STEP: Dumping all the Cluster API resources in the "cluster-ntnx-client-idmolu" namespace @ 04/29/24 11:23:03
  STEP: Deleting cluster cluster-ntnx-client-idmolu/cluster-ntnx-client-5cx6ww @ 04/29/24 11:23:03.131
  STEP: Deleting cluster cluster-ntnx-client-5cx6ww @ 04/29/24 11:23:03.138
  INFO: Waiting for the Cluster cluster-ntnx-client-idmolu/cluster-ntnx-client-5cx6ww to be deleted
  STEP: Waiting for cluster cluster-ntnx-client-5cx6ww to be deleted @ 04/29/24 11:23:03.142
  STEP: Deleting namespace used for hosting the "cluster-ntnx-client" test spec @ 04/29/24 11:23:53.184
  INFO: Deleting namespace cluster-ntnx-client-idmolu
• [92.174 seconds]
------------------------------
Nutanix client Create a cluster with invalid credentials (should fail) [capx-feature-test, nutanix-client, slow, network]
/Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/test/e2e/nutanix_client_test.go:246
  STEP: Creating a namespace for hosting the "cluster-ntnx-client" test spec @ 04/29/24 11:23:54.323
  INFO: Creating namespace cluster-ntnx-client-sthwh8
  INFO: Creating event watcher for namespace "cluster-ntnx-client-sthwh8"
  STEP: Creating secret with invalid credentials @ 04/29/24 11:23:54.362
  STEP: Creating a workload cluster @ 04/29/24 11:23:54.376
  INFO: clusterctl config cluster cluster-ntnx-client-wkht7n --infrastructure (default) --kubernetes-version v1.29.2 --control-plane-machine-count 1 --worker-machine-count 1 --flavor no-secret
configmap/user-ca-bundle created
configmap/cni-cluster-ntnx-client-wkht7n-crs-cni created
clusterresourceset.addons.cluster.x-k8s.io/cluster-ntnx-client-wkht7n-crs-cni created
kubeadmconfigtemplate.bootstrap.cluster.x-k8s.io/cluster-ntnx-client-wkht7n-kcfg-0 created
cluster.cluster.x-k8s.io/cluster-ntnx-client-wkht7n created
machinedeployment.cluster.x-k8s.io/cluster-ntnx-client-wkht7n-wmd created
machinehealthcheck.cluster.x-k8s.io/cluster-ntnx-client-wkht7n-mhc created
kubeadmcontrolplane.controlplane.cluster.x-k8s.io/cluster-ntnx-client-wkht7n-kcp created
nutanixcluster.infrastructure.cluster.x-k8s.io/cluster-ntnx-client-wkht7n created
nutanixmachinetemplate.infrastructure.cluster.x-k8s.io/cluster-ntnx-client-wkht7n-mt-0 created

  STEP: Checking cluster credential condition is true @ 04/29/24 11:23:54.774
  STEP: Checking cluster prism client init condition is false @ 04/29/24 11:24:34.903
  STEP: PASSED! @ 04/29/24 11:24:34.917
  STEP: Dumping logs from the "cluster-ntnx-client-wkht7n" workload cluster @ 04/29/24 11:24:34.917
Failed to get logs for Machine cluster-ntnx-client-wkht7n-wmd-647949d78fxd499d-c7sht, Cluster cluster-ntnx-client-sthwh8/cluster-ntnx-client-wkht7n: error creating container exec: Error response from daemon: No such container: cluster-ntnx-client-wkht7n-wmd-647949d78fxd499d-c7sht
  STEP: Dumping all the Cluster API resources in the "cluster-ntnx-client-sthwh8" namespace @ 04/29/24 11:24:34.962
  STEP: Deleting cluster cluster-ntnx-client-sthwh8/cluster-ntnx-client-wkht7n @ 04/29/24 11:24:35.079
  STEP: Deleting cluster cluster-ntnx-client-wkht7n @ 04/29/24 11:24:35.085
  INFO: Waiting for the Cluster cluster-ntnx-client-sthwh8/cluster-ntnx-client-wkht7n to be deleted
  STEP: Waiting for cluster cluster-ntnx-client-wkht7n to be deleted @ 04/29/24 11:24:35.088
  STEP: Deleting namespace used for hosting the "cluster-ntnx-client" test spec @ 04/29/24 11:24:45.102
  INFO: Deleting namespace cluster-ntnx-client-sthwh8
• [51.886 seconds]
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[SynchronizedAfterSuite]
/Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/test/e2e/e2e_suite_test.go:171
  STEP: Dumping logs from the bootstrap cluster @ 04/29/24 11:24:45.128
Failed to get logs for the bootstrap cluster node test-c64f1l-control-plane: exit status 1
  STEP: Tearing down the management cluster @ 04/29/24 11:24:45.253
[SynchronizedAfterSuite] PASSED [1.380 seconds]
------------------------------
[ReportAfterSuite] Autogenerated ReportAfterSuite for --junit-report
autogenerated by Ginkgo
[ReportAfterSuite] PASSED [0.006 seconds]
------------------------------

Ran 4 of 32 Specs in 380.171 seconds
SUCCESS! -- 4 Passed | 0 Failed | 0 Pending | 28 Skipped
You're using deprecated Ginkgo functionality:
=============================================
  --ginkgo.always-emit-ginkgo-writer is deprecated  - use -v instead, or one of Ginkgo's machine-readable report formats to get GinkgoWriter output for passing specs.
  --ginkgo.progress is deprecated .  The functionality provided by --progress was confusing and is no longer needed.  Use --show-node-events instead to see node entry and exit events included in the timeline of failed and verbose specs.  Or you can run with -vv to always see all node events.  Lastly, --poll-progress-after and the PollProgressAfter decorator now provide a better mechanism for debugging specs that tend to get stuck.
  --ginkgo.slow-spec-threshold is deprecated --slow-spec-threshold has been deprecated and will be removed in a future version of Ginkgo.  This feature has proved to be more noisy than useful.  You can use --poll-progress-after, instead, to get more actionable feedback about potentially slow specs and understand where they might be getting stuck.

To silence deprecations that can be silenced set the following environment variable:
  ACK_GINKGO_DEPRECATIONS=2.6.0

PASS

Ginkgo ran 1 suite in 6m25.292658125s
Test Suite Passed

However, the test with the ccm label also failed for me locally:

$ LABEL_FILTERS=ccm make test-e2e-calico
CNI="/Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/test/e2e/data/cni/calico/calico.yaml" /Library/Developer/CommandLineTools/usr/bin/make test-e2e
KO_DOCKER_REPO=ko.local /Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/hack/tools/bin/ko-v0.11.2 build -B --platform=linux/amd64 -t e2e -L .
2024/04/29 11:35:24 Using base gcr.io/distroless/static:nonroot@sha256:e9ac71e2b8e279a8372741b7a0293afda17650d926900233ec3a7b2b7c22a246 for github.com/nutanix-cloud-native/cluster-api-provider-nutanix
2024/04/29 11:35:25 Building github.com/nutanix-cloud-native/cluster-api-provider-nutanix for linux/amd64
2024/04/29 11:35:27 Loading ko.local/cluster-api-provider-nutanix:053bea19de6c0db64ee45e55c3835df111dd8a340e6c2c69394122cdd35123ab
2024/04/29 11:35:30 Loaded ko.local/cluster-api-provider-nutanix:053bea19de6c0db64ee45e55c3835df111dd8a340e6c2c69394122cdd35123ab
2024/04/29 11:35:30 Adding tag e2e
2024/04/29 11:35:30 Added tag e2e
ko.local/cluster-api-provider-nutanix:053bea19de6c0db64ee45e55c3835df111dd8a340e6c2c69394122cdd35123ab
docker tag ko.local/cluster-api-provider-nutanix:e2e ghcr.io/nutanix-cloud-native/cluster-api-provider-nutanix/controller:e2e
/Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/hack/tools/bin/kustomize-v4.5.4 build /Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/test/e2e/data/infrastructure-nutanix/v1beta1/cluster-template --load-restrictor LoadRestrictionsNone > /Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/test/e2e/data/infrastructure-nutanix/v1beta1/cluster-template.yaml
/Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/hack/tools/bin/kustomize-v4.5.4 build /Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/test/e2e/data/infrastructure-nutanix/v1beta1/cluster-template-no-secret --load-restrictor LoadRestrictionsNone > /Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/test/e2e/data/infrastructure-nutanix/v1beta1/cluster-template-no-secret.yaml
/Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/hack/tools/bin/kustomize-v4.5.4 build /Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/test/e2e/data/infrastructure-nutanix/v1beta1/cluster-template-no-nutanix-cluster --load-restrictor LoadRestrictionsNone > /Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/test/e2e/data/infrastructure-nutanix/v1beta1/cluster-template-no-nutanix-cluster.yaml
/Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/hack/tools/bin/kustomize-v4.5.4 build /Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/test/e2e/data/infrastructure-nutanix/v1beta1/cluster-template-additional-categories --load-restrictor LoadRestrictionsNone > /Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/test/e2e/data/infrastructure-nutanix/v1beta1/cluster-template-additional-categories.yaml
/Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/hack/tools/bin/kustomize-v4.5.4 build /Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/test/e2e/data/infrastructure-nutanix/v1beta1/cluster-template-no-nmt --load-restrictor LoadRestrictionsNone > /Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/test/e2e/data/infrastructure-nutanix/v1beta1/cluster-template-no-nmt.yaml
/Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/hack/tools/bin/kustomize-v4.5.4 build /Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/test/e2e/data/infrastructure-nutanix/v1beta1/cluster-template-project --load-restrictor LoadRestrictionsNone > /Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/test/e2e/data/infrastructure-nutanix/v1beta1/cluster-template-project.yaml
/Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/hack/tools/bin/kustomize-v4.5.4 build /Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/test/e2e/data/infrastructure-nutanix/v1beta1/cluster-template-ccm --load-restrictor LoadRestrictionsNone > /Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/test/e2e/data/infrastructure-nutanix/v1beta1/cluster-template-ccm.yaml
/Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/hack/tools/bin/kustomize-v4.5.4 build /Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/test/e2e/data/infrastructure-nutanix/v1beta1/cluster-template-upgrades --load-restrictor LoadRestrictionsNone > /Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/test/e2e/data/infrastructure-nutanix/v1beta1/cluster-template-upgrades.yaml
/Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/hack/tools/bin/kustomize-v4.5.4 build /Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/test/e2e/data/infrastructure-nutanix/v1beta1/cluster-template-md-remediation --load-restrictor LoadRestrictionsNone > /Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/test/e2e/data/infrastructure-nutanix/v1beta1/cluster-template-md-remediation.yaml
/Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/hack/tools/bin/kustomize-v4.5.4 build /Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/test/e2e/data/infrastructure-nutanix/v1beta1/cluster-template-kcp-remediation --load-restrictor LoadRestrictionsNone > /Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/test/e2e/data/infrastructure-nutanix/v1beta1/cluster-template-kcp-remediation.yaml
/Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/hack/tools/bin/kustomize-v4.5.4 build /Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/test/e2e/data/infrastructure-nutanix/v1beta1/cluster-template-kcp-scale-in --load-restrictor LoadRestrictionsNone > /Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/test/e2e/data/infrastructure-nutanix/v1beta1/cluster-template-kcp-scale-in.yaml
/Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/hack/tools/bin/kustomize-v4.5.4 build /Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/test/e2e/data/infrastructure-nutanix/v1beta1/cluster-template-csi --load-restrictor LoadRestrictionsNone > /Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/test/e2e/data/infrastructure-nutanix/v1beta1/cluster-template-csi.yaml
/Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/hack/tools/bin/kustomize-v4.5.4 build /Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/test/e2e/data/infrastructure-nutanix/v1alpha4/cluster-template --load-restrictor LoadRestrictionsNone > /Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/test/e2e/data/infrastructure-nutanix/v1alpha4/cluster-template.yaml
/Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/hack/tools/bin/kustomize-v4.5.4 build templates/base > templates/cluster-template.yaml
/Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/hack/tools/bin/kustomize-v4.5.4 build templates/csi > templates/cluster-template-csi.yaml
/Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/hack/tools/bin/kustomize-v4.5.4 build templates/ccm > templates/cluster-template-ccm.yaml
mkdir -p /Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/_artifacts
NUTANIX_LOG_LEVEL=debug /Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/hack/tools/bin/ginkgo-v2.1.4 -v \
		--trace \
		--progress \
		--tags=e2e \
		--label-filter="!only-for-validation && ccm" \
		--skip=""clusterctl-Upgrade"" \
		--nodes=1 \
		--no-color=false \
		--output-dir="/Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/_artifacts" \
		--junit-report="junit.e2e_suite.1.xml" \
		--timeout="24h" \
		--always-emit-ginkgo-writer \
		 ./test/e2e -- \
		-e2e.artifacts-folder="/Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/_artifacts" \
		-e2e.config="/Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/test/e2e/config/nutanix.yaml" \
		-e2e.skip-resource-cleanup=false \
		-e2e.use-existing-cluster=false
Running Suite: capx-e2e - /Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/test/e2e
========================================================================================================================
Random Seed: 1714383338

Will run 1 of 32 specs
------------------------------
[SynchronizedBeforeSuite]
/Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/test/e2e/e2e_suite_test.go:119
  STEP: Initializing a runtime.Scheme with all the GVK relevant for this test @ 04/29/24 11:35:42.374
  STEP: Loading the e2e test configuration from "/Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/test/e2e/config/nutanix.yaml" @ 04/29/24 11:35:42.374
  STEP: Creating a clusterctl local repository into "/Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/_artifacts" @ 04/29/24 11:35:42.375
  STEP: Reading the ClusterResourceSet manifest /Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/test/e2e/data/cni/calico/calico.yaml @ 04/29/24 11:35:42.375
  STEP: Setting up the bootstrap cluster @ 04/29/24 11:35:55.162
  STEP: Creating the bootstrap cluster @ 04/29/24 11:35:55.162
  INFO: Creating a kind cluster with name "test-yf3in3"
Creating cluster "test-yf3in3" ...
 • Ensuring node image (kindest/node:v1.23.6) 🖼  ...
 ✓ Ensuring node image (kindest/node:v1.23.6) 🖼
 • Preparing nodes 📦   ...
 ✓ Preparing nodes 📦
 • Writing configuration 📜  ...
 ✓ Writing configuration 📜
 • Starting control-plane 🕹️  ...
 ✓ Starting control-plane 🕹️
 • Installing CNI 🔌  ...
 ✓ Installing CNI 🔌
 • Installing StorageClass 💾  ...
 ✓ Installing StorageClass 💾
  INFO: The kubeconfig file for the kind cluster is /var/folders/g3/lb827bt96z10xz_m_c2xn93w0000gp/T/e2e-kind1217648974
  INFO: Loading image: "ghcr.io/nutanix-cloud-native/cluster-api-provider-nutanix/controller:e2e"
  INFO: Image ghcr.io/nutanix-cloud-native/cluster-api-provider-nutanix/controller:e2e is present in local container image cache
  INFO: Loading image: "registry.k8s.io/cluster-api/cluster-api-controller:v1.4.1"
  INFO: Image registry.k8s.io/cluster-api/cluster-api-controller:v1.4.1 is present in local container image cache
  INFO: Loading image: "registry.k8s.io/cluster-api/kubeadm-bootstrap-controller:v1.4.1"
  INFO: Image registry.k8s.io/cluster-api/kubeadm-bootstrap-controller:v1.4.1 is present in local container image cache
  INFO: Loading image: "registry.k8s.io/cluster-api/kubeadm-control-plane-controller:v1.4.1"
  INFO: Image registry.k8s.io/cluster-api/kubeadm-control-plane-controller:v1.4.1 is present in local container image cache
  STEP: Initializing the bootstrap cluster @ 04/29/24 11:36:11.522
  INFO: clusterctl init --config /Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/_artifacts/repository/clusterctl-config.yaml --kubeconfig /var/folders/g3/lb827bt96z10xz_m_c2xn93w0000gp/T/e2e-kind1217648974 --core cluster-api --bootstrap kubeadm --control-plane kubeadm --infrastructure nutanix
  INFO: Waiting for provider controllers to be running
  STEP: Waiting for deployment capi-kubeadm-bootstrap-system/capi-kubeadm-bootstrap-controller-manager to be available @ 04/29/24 11:37:17.641
  INFO: Creating log watcher for controller capi-kubeadm-bootstrap-system/capi-kubeadm-bootstrap-controller-manager, pod capi-kubeadm-bootstrap-controller-manager-6d4987b5bf-r5kf6, container manager
  STEP: Waiting for deployment capi-kubeadm-control-plane-system/capi-kubeadm-control-plane-controller-manager to be available @ 04/29/24 11:37:17.652
  INFO: Creating log watcher for controller capi-kubeadm-control-plane-system/capi-kubeadm-control-plane-controller-manager, pod capi-kubeadm-control-plane-controller-manager-67c7668b94-76svz, container manager
  STEP: Waiting for deployment capi-system/capi-controller-manager to be available @ 04/29/24 11:37:17.659
  INFO: Creating log watcher for controller capi-system/capi-controller-manager, pod capi-controller-manager-86464d4bfd-bp79t, container manager
  STEP: Waiting for deployment capx-system/capx-controller-manager to be available @ 04/29/24 11:37:17.67
  INFO: Creating log watcher for controller capx-system/capx-controller-manager, pod capx-controller-manager-84d9fcd6f8-g98fx, container kube-rbac-proxy
  INFO: Creating log watcher for controller capx-system/capx-controller-manager, pod capx-controller-manager-84d9fcd6f8-g98fx, container manager
[SynchronizedBeforeSuite] PASSED [95.684 seconds]
------------------------------
SSSSSSS
------------------------------
Nutanix flavor CCM Create a cluster with Nutanix CCM [capx-feature-test, ccm, slow, network]
/Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/test/e2e/ccm_test.go:68
  STEP: Creating a namespace for hosting the "cluster-ccm" test spec @ 04/29/24 11:37:19.335
  INFO: Creating namespace cluster-ccm-ibxycw
  INFO: Creating event watcher for namespace "cluster-ccm-ibxycw"
  STEP: Creating a workload cluster @ 04/29/24 11:37:19.371
  INFO: Creating the workload cluster with name "cluster-ccm-8x9b4n" using the "ccm" template (Kubernetes v1.29.2, 1 control-plane machines, 1 worker machines)
  INFO: Getting the cluster template yaml
  INFO: clusterctl config cluster cluster-ccm-8x9b4n --infrastructure (default) --kubernetes-version v1.29.2 --control-plane-machine-count 1 --worker-machine-count 1 --flavor ccm
  INFO: Applying the cluster template yaml to the cluster
configmap/user-ca-bundle created
configmap/cni-cluster-ccm-8x9b4n-crs-cni created
configmap/nutanix-ccm created
secret/cluster-ccm-8x9b4n created
secret/nutanix-ccm-secret created
clusterresourceset.addons.cluster.x-k8s.io/cluster-ccm-8x9b4n-crs-cni created
clusterresourceset.addons.cluster.x-k8s.io/nutanix-ccm-crs created
kubeadmconfigtemplate.bootstrap.cluster.x-k8s.io/cluster-ccm-8x9b4n-kcfg-0 created
cluster.cluster.x-k8s.io/cluster-ccm-8x9b4n created
machinedeployment.cluster.x-k8s.io/cluster-ccm-8x9b4n-wmd created
machinehealthcheck.cluster.x-k8s.io/cluster-ccm-8x9b4n-mhc created
kubeadmcontrolplane.controlplane.cluster.x-k8s.io/cluster-ccm-8x9b4n-kcp created
nutanixcluster.infrastructure.cluster.x-k8s.io/cluster-ccm-8x9b4n created
nutanixmachinetemplate.infrastructure.cluster.x-k8s.io/cluster-ccm-8x9b4n-mt-0 created

  INFO: Waiting for the cluster infrastructure to be provisioned
  STEP: Waiting for cluster to enter the provisioned phase @ 04/29/24 11:37:20.589
  INFO: Waiting for control plane to be initialized
  INFO: Waiting for the first control plane machine managed by cluster-ccm-ibxycw/cluster-ccm-8x9b4n-kcp to be provisioned
  STEP: Waiting for one control plane node to exist @ 04/29/24 11:37:40.645
  INFO: Waiting for control plane to be ready
  INFO: Waiting for control plane cluster-ccm-ibxycw/cluster-ccm-8x9b4n-kcp to be ready (implies underlying nodes to be ready as well)
  STEP: Waiting for the control plane to be ready @ 04/29/24 11:38:30.732
  STEP: Checking all the control plane machines are in the expected failure domains @ 04/29/24 11:39:40.821
  INFO: Waiting for the machine deployments to be provisioned
  STEP: Waiting for the workload nodes to exist @ 04/29/24 11:39:40.853
  STEP: Checking all the machines controlled by cluster-ccm-8x9b4n-wmd are in the "<None>" failure domain @ 04/29/24 11:39:40.863
  INFO: Waiting for the machine pools to be provisioned
  STEP: Fetching workload proxy @ 04/29/24 11:39:40.894
  STEP: Checking if nodes have correct CCM labels @ 04/29/24 11:39:40.921
  [FAILED] in [It] - /Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/test/e2e/ccm_test.go:92 @ 04/29/24 11:39:41.702
  STEP: Dumping logs from the "cluster-ccm-8x9b4n" workload cluster @ 04/29/24 11:39:41.702
Failed to get logs for Machine cluster-ccm-8x9b4n-kcp-2q97q, Cluster cluster-ccm-ibxycw/cluster-ccm-8x9b4n: error creating container exec: Error response from daemon: No such container: cluster-ccm-8x9b4n-kcp-2q97q
Failed to get logs for Machine cluster-ccm-8x9b4n-wmd-f477f6d4xgc6ql-pvc4v, Cluster cluster-ccm-ibxycw/cluster-ccm-8x9b4n: error creating container exec: Error response from daemon: No such container: cluster-ccm-8x9b4n-wmd-f477f6d4xgc6ql-pvc4v
  STEP: Dumping all the Cluster API resources in the "cluster-ccm-ibxycw" namespace @ 04/29/24 11:39:41.79
  STEP: Deleting cluster cluster-ccm-ibxycw/cluster-ccm-8x9b4n @ 04/29/24 11:39:41.906
  STEP: Deleting cluster cluster-ccm-8x9b4n @ 04/29/24 11:39:41.912
  INFO: Waiting for the Cluster cluster-ccm-ibxycw/cluster-ccm-8x9b4n to be deleted
  STEP: Waiting for cluster cluster-ccm-8x9b4n to be deleted @ 04/29/24 11:39:41.916
  STEP: Deleting namespace used for hosting the "cluster-ccm" test spec @ 04/29/24 11:41:52.033
  INFO: Deleting namespace cluster-ccm-ibxycw
• [FAILED] [273.991 seconds]
Nutanix flavor CCM [It] Create a cluster with Nutanix CCM [capx-feature-test, ccm, slow, network]
/Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/test/e2e/ccm_test.go:68

  [FAILED] Expected
      <string>:
  to match keys: {
  missing expected key node.kubernetes.io/instance-type
  }

  In [It] at: /Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/test/e2e/ccm_test.go:92 @ 04/29/24 11:39:41.702

  Full Stack Trace
    github.com/nutanix-cloud-native/cluster-api-provider-nutanix/test/e2e.init.func4.3()
    	/Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/test/e2e/ccm_test.go:92 +0x434
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[SynchronizedAfterSuite]
/Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/test/e2e/e2e_suite_test.go:171
  STEP: Dumping logs from the bootstrap cluster @ 04/29/24 11:41:52.052
Failed to get logs for the bootstrap cluster node test-yf3in3-control-plane: exit status 1
  STEP: Tearing down the management cluster @ 04/29/24 11:41:52.181
[SynchronizedAfterSuite] PASSED [0.621 seconds]
------------------------------
[ReportAfterSuite] Autogenerated ReportAfterSuite for --junit-report
autogenerated by Ginkgo
[ReportAfterSuite] PASSED [0.003 seconds]
------------------------------

Summarizing 1 Failure:
  [FAIL] Nutanix flavor CCM [It] Create a cluster with Nutanix CCM [capx-feature-test, ccm, slow, network]
  /Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/test/e2e/ccm_test.go:92

Ran 1 of 32 Specs in 370.297 seconds
FAIL! -- 0 Passed | 1 Failed | 0 Pending | 31 Skipped
You're using deprecated Ginkgo functionality:
=============================================
  --ginkgo.progress is deprecated .  The functionality provided by --progress was confusing and is no longer needed.  Use --show-node-events instead to see node entry and exit events included in the timeline of failed and verbose specs.  Or you can run with -vv to always see all node events.  Lastly, --poll-progress-after and the PollProgressAfter decorator now provide a better mechanism for debugging specs that tend to get stuck.
  --ginkgo.slow-spec-threshold is deprecated --slow-spec-threshold has been deprecated and will be removed in a future version of Ginkgo.  This feature has proved to be more noisy than useful.  You can use --poll-progress-after, instead, to get more actionable feedback about potentially slow specs and understand where they might be getting stuck.
  --ginkgo.always-emit-ginkgo-writer is deprecated  - use -v instead, or one of Ginkgo's machine-readable report formats to get GinkgoWriter output for passing specs.

To silence deprecations that can be silenced set the following environment variable:
  ACK_GINKGO_DEPRECATIONS=2.6.0

--- FAIL: TestE2E (370.30s)
FAIL

Ginkgo ran 1 suite in 6m14.552602375s

Test Suite Failed
make[1]: *** [test-e2e] Error 1
make: *** [test-e2e-calico] Error 2
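
The [FAIL] points at the node-label assertion at ccm_test.go:92: the workload nodes were missing the node.kubernetes.io/instance-type label that the Nutanix CCM is expected to apply. A quick manual spot-check (an illustrative command against a hypothetical workload kubeconfig, not part of the test suite):

```sh
# List each node with its instance-type label; an empty second column means
# the CCM has not labeled that node. <workload-kubeconfig> is a placeholder.
kubectl --kubeconfig <workload-kubeconfig> get nodes \
  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.metadata.labels.node\.kubernetes\.io/instance-type}{"\n"}{end}'
```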

@thunderboltsid
Contributor Author

CCM tests seem to pass locally on the release-v1.2 branch:

$ LABEL_FILTERS=ccm make test-e2e-calico
CNI="/Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/test/e2e/data/cni/calico/calico.yaml" /Library/Developer/CommandLineTools/usr/bin/make test-e2e
KO_DOCKER_REPO=ko.local /Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/hack/tools/bin/ko-v0.11.2 build -B --platform=linux/amd64 -t e2e -L .
2024/04/29 16:42:20 Using base gcr.io/distroless/static:nonroot@sha256:e9ac71e2b8e279a8372741b7a0293afda17650d926900233ec3a7b2b7c22a246 for github.com/nutanix-cloud-native/cluster-api-provider-nutanix
2024/04/29 16:42:21 Building github.com/nutanix-cloud-native/cluster-api-provider-nutanix for linux/amd64
2024/04/29 16:42:23 Loading ko.local/cluster-api-provider-nutanix:8ca6f88bef30f8e5d0901b322aa4b5da860bd1b295c4567e349df632f6516165
2024/04/29 16:42:25 Loaded ko.local/cluster-api-provider-nutanix:8ca6f88bef30f8e5d0901b322aa4b5da860bd1b295c4567e349df632f6516165
2024/04/29 16:42:25 Adding tag e2e
2024/04/29 16:42:25 Added tag e2e
ko.local/cluster-api-provider-nutanix:8ca6f88bef30f8e5d0901b322aa4b5da860bd1b295c4567e349df632f6516165
docker tag ko.local/cluster-api-provider-nutanix:e2e ghcr.io/nutanix-cloud-native/cluster-api-provider-nutanix/controller:e2e
/Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/hack/tools/bin/kustomize-v4.5.4 build /Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/test/e2e/data/infrastructure-nutanix/v1beta1/cluster-template --load-restrictor LoadRestrictionsNone > /Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/test/e2e/data/infrastructure-nutanix/v1beta1/cluster-template.yaml
/Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/hack/tools/bin/kustomize-v4.5.4 build /Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/test/e2e/data/infrastructure-nutanix/v1beta1/cluster-template-no-secret --load-restrictor LoadRestrictionsNone > /Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/test/e2e/data/infrastructure-nutanix/v1beta1/cluster-template-no-secret.yaml
/Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/hack/tools/bin/kustomize-v4.5.4 build /Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/test/e2e/data/infrastructure-nutanix/v1beta1/cluster-template-no-nutanix-cluster --load-restrictor LoadRestrictionsNone > /Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/test/e2e/data/infrastructure-nutanix/v1beta1/cluster-template-no-nutanix-cluster.yaml
/Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/hack/tools/bin/kustomize-v4.5.4 build /Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/test/e2e/data/infrastructure-nutanix/v1beta1/cluster-template-additional-categories --load-restrictor LoadRestrictionsNone > /Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/test/e2e/data/infrastructure-nutanix/v1beta1/cluster-template-additional-categories.yaml
/Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/hack/tools/bin/kustomize-v4.5.4 build /Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/test/e2e/data/infrastructure-nutanix/v1beta1/cluster-template-no-nmt --load-restrictor LoadRestrictionsNone > /Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/test/e2e/data/infrastructure-nutanix/v1beta1/cluster-template-no-nmt.yaml
/Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/hack/tools/bin/kustomize-v4.5.4 build /Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/test/e2e/data/infrastructure-nutanix/v1beta1/cluster-template-project --load-restrictor LoadRestrictionsNone > /Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/test/e2e/data/infrastructure-nutanix/v1beta1/cluster-template-project.yaml
/Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/hack/tools/bin/kustomize-v4.5.4 build /Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/test/e2e/data/infrastructure-nutanix/v1beta1/cluster-template-ccm --load-restrictor LoadRestrictionsNone > /Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/test/e2e/data/infrastructure-nutanix/v1beta1/cluster-template-ccm.yaml
/Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/hack/tools/bin/kustomize-v4.5.4 build /Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/test/e2e/data/infrastructure-nutanix/v1beta1/cluster-template-upgrades --load-restrictor LoadRestrictionsNone > /Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/test/e2e/data/infrastructure-nutanix/v1beta1/cluster-template-upgrades.yaml
/Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/hack/tools/bin/kustomize-v4.5.4 build /Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/test/e2e/data/infrastructure-nutanix/v1beta1/cluster-template-md-remediation --load-restrictor LoadRestrictionsNone > /Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/test/e2e/data/infrastructure-nutanix/v1beta1/cluster-template-md-remediation.yaml
/Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/hack/tools/bin/kustomize-v4.5.4 build /Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/test/e2e/data/infrastructure-nutanix/v1beta1/cluster-template-kcp-remediation --load-restrictor LoadRestrictionsNone > /Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/test/e2e/data/infrastructure-nutanix/v1beta1/cluster-template-kcp-remediation.yaml
/Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/hack/tools/bin/kustomize-v4.5.4 build /Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/test/e2e/data/infrastructure-nutanix/v1beta1/cluster-template-kcp-scale-in --load-restrictor LoadRestrictionsNone > /Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/test/e2e/data/infrastructure-nutanix/v1beta1/cluster-template-kcp-scale-in.yaml
/Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/hack/tools/bin/kustomize-v4.5.4 build /Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/test/e2e/data/infrastructure-nutanix/v1beta1/cluster-template-csi --load-restrictor LoadRestrictionsNone > /Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/test/e2e/data/infrastructure-nutanix/v1beta1/cluster-template-csi.yaml
/Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/hack/tools/bin/kustomize-v4.5.4 build /Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/test/e2e/data/infrastructure-nutanix/v1alpha4/cluster-template --load-restrictor LoadRestrictionsNone > /Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/test/e2e/data/infrastructure-nutanix/v1alpha4/cluster-template.yaml
/Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/hack/tools/bin/kustomize-v4.5.4 build templates/base > templates/cluster-template.yaml
/Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/hack/tools/bin/kustomize-v4.5.4 build templates/csi > templates/cluster-template-csi.yaml
/Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/hack/tools/bin/kustomize-v4.5.4 build templates/ccm > templates/cluster-template-ccm.yaml
mkdir -p /Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/_artifacts
NUTANIX_LOG_LEVEL=debug /Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/hack/tools/bin/ginkgo-v2.1.4 -v \
		--trace \
		--progress \
		--tags=e2e \
		--label-filter="!only-for-validation && ccm" \
		--skip=""clusterctl-Upgrade"" \
		--nodes=1 \
		--no-color=false \
		--output-dir="/Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/_artifacts" \
		--junit-report="junit.e2e_suite.1.xml" \
		--timeout="24h" \
		--always-emit-ginkgo-writer \
		 ./test/e2e -- \
		-e2e.artifacts-folder="/Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/_artifacts" \
		-e2e.config="/Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/test/e2e/config/nutanix.yaml" \
		-e2e.skip-resource-cleanup=false \
		-e2e.use-existing-cluster=false
Running Suite: capx-e2e - /Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/test/e2e
========================================================================================================================
Random Seed: 1714401752

Will run 1 of 32 specs
------------------------------
[SynchronizedBeforeSuite]
/Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/test/e2e/e2e_suite_test.go:119
  STEP: Initializing a runtime.Scheme with all the GVK relevant for this test @ 04/29/24 16:42:36.421
  STEP: Loading the e2e test configuration from "/Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/test/e2e/config/nutanix.yaml" @ 04/29/24 16:42:36.421
  STEP: Creating a clusterctl local repository into "/Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/_artifacts" @ 04/29/24 16:42:36.423
  STEP: Reading the ClusterResourceSet manifest /Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/test/e2e/data/cni/calico/calico.yaml @ 04/29/24 16:42:36.423
  STEP: Setting up the bootstrap cluster @ 04/29/24 16:42:49.118
  STEP: Creating the bootstrap cluster @ 04/29/24 16:42:49.118
  INFO: Creating a kind cluster with name "test-82bozt"
Creating cluster "test-82bozt" ...
 • Ensuring node image (kindest/node:v1.23.6) 🖼  ...
 ✓ Ensuring node image (kindest/node:v1.23.6) 🖼
 • Preparing nodes 📦   ...
 ✓ Preparing nodes 📦
 • Writing configuration 📜  ...
 ✓ Writing configuration 📜
 • Starting control-plane 🕹️  ...
 ✓ Starting control-plane 🕹️
 • Installing CNI 🔌  ...
 ✓ Installing CNI 🔌
 • Installing StorageClass 💾  ...
 ✓ Installing StorageClass 💾
  INFO: The kubeconfig file for the kind cluster is /var/folders/g3/lb827bt96z10xz_m_c2xn93w0000gp/T/e2e-kind2552714282
  INFO: Loading image: "ghcr.io/nutanix-cloud-native/cluster-api-provider-nutanix/controller:e2e"
  INFO: Image ghcr.io/nutanix-cloud-native/cluster-api-provider-nutanix/controller:e2e is present in local container image cache
  INFO: Loading image: "registry.k8s.io/cluster-api/cluster-api-controller:v1.4.1"
  INFO: Image registry.k8s.io/cluster-api/cluster-api-controller:v1.4.1 is present in local container image cache
  INFO: Loading image: "registry.k8s.io/cluster-api/kubeadm-bootstrap-controller:v1.4.1"
  INFO: Image registry.k8s.io/cluster-api/kubeadm-bootstrap-controller:v1.4.1 is present in local container image cache
  INFO: Loading image: "registry.k8s.io/cluster-api/kubeadm-control-plane-controller:v1.4.1"
  INFO: Image registry.k8s.io/cluster-api/kubeadm-control-plane-controller:v1.4.1 is present in local container image cache
  STEP: Initializing the bootstrap cluster @ 04/29/24 16:43:04.573
  INFO: clusterctl init --config /Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/_artifacts/repository/clusterctl-config.yaml --kubeconfig /var/folders/g3/lb827bt96z10xz_m_c2xn93w0000gp/T/e2e-kind2552714282 --core cluster-api --bootstrap kubeadm --control-plane kubeadm --infrastructure nutanix
  INFO: Waiting for provider controllers to be running
  STEP: Waiting for deployment capi-kubeadm-bootstrap-system/capi-kubeadm-bootstrap-controller-manager to be available @ 04/29/24 16:44:06.854
  INFO: Creating log watcher for controller capi-kubeadm-bootstrap-system/capi-kubeadm-bootstrap-controller-manager, pod capi-kubeadm-bootstrap-controller-manager-6d4987b5bf-scpq8, container manager
  STEP: Waiting for deployment capi-kubeadm-control-plane-system/capi-kubeadm-control-plane-controller-manager to be available @ 04/29/24 16:44:06.865
  INFO: Creating log watcher for controller capi-kubeadm-control-plane-system/capi-kubeadm-control-plane-controller-manager, pod capi-kubeadm-control-plane-controller-manager-67c7668b94-7dkg6, container manager
  STEP: Waiting for deployment capi-system/capi-controller-manager to be available @ 04/29/24 16:44:06.875
  INFO: Creating log watcher for controller capi-system/capi-controller-manager, pod capi-controller-manager-86464d4bfd-rr64b, container manager
  STEP: Waiting for deployment capx-system/capx-controller-manager to be available @ 04/29/24 16:44:06.882
  INFO: Creating log watcher for controller capx-system/capx-controller-manager, pod capx-controller-manager-84d9fcd6f8-qq79g, container kube-rbac-proxy
  INFO: Creating log watcher for controller capx-system/capx-controller-manager, pod capx-controller-manager-84d9fcd6f8-qq79g, container manager
[SynchronizedBeforeSuite] PASSED [90.850 seconds]
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
Nutanix flavor CCM Create a cluster with Nutanix CCM [capx-feature-test, ccm, slow, network]
/Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/test/e2e/ccm_test.go:68
  STEP: Creating a namespace for hosting the "cluster-ccm" test spec @ 04/29/24 16:44:08.296
  INFO: Creating namespace cluster-ccm-mrj0nh
  INFO: Creating event watcher for namespace "cluster-ccm-mrj0nh"
  STEP: Creating a workload cluster @ 04/29/24 16:44:08.323
  INFO: Creating the workload cluster with name "cluster-ccm-wzxvrf" using the "ccm" template (Kubernetes v1.28.7, 1 control-plane machines, 1 worker machines)
  INFO: Getting the cluster template yaml
  INFO: clusterctl config cluster cluster-ccm-wzxvrf --infrastructure (default) --kubernetes-version v1.28.7 --control-plane-machine-count 1 --worker-machine-count 1 --flavor ccm
  INFO: Applying the cluster template yaml to the cluster
configmap/user-ca-bundle created
configmap/cni-cluster-ccm-wzxvrf-crs-cni created
configmap/nutanix-ccm created
secret/cluster-ccm-wzxvrf created
secret/nutanix-ccm-secret created
clusterresourceset.addons.cluster.x-k8s.io/cluster-ccm-wzxvrf-crs-cni created
clusterresourceset.addons.cluster.x-k8s.io/nutanix-ccm-crs created
kubeadmconfigtemplate.bootstrap.cluster.x-k8s.io/cluster-ccm-wzxvrf-kcfg-0 created
cluster.cluster.x-k8s.io/cluster-ccm-wzxvrf created
machinedeployment.cluster.x-k8s.io/cluster-ccm-wzxvrf-wmd created
machinehealthcheck.cluster.x-k8s.io/cluster-ccm-wzxvrf-mhc created
kubeadmcontrolplane.controlplane.cluster.x-k8s.io/cluster-ccm-wzxvrf-kcp created
nutanixcluster.infrastructure.cluster.x-k8s.io/cluster-ccm-wzxvrf created
nutanixmachinetemplate.infrastructure.cluster.x-k8s.io/cluster-ccm-wzxvrf-mt-0 created

  INFO: Waiting for the cluster infrastructure to be provisioned
  STEP: Waiting for cluster to enter the provisioned phase @ 04/29/24 16:44:09.564
  INFO: Waiting for control plane to be initialized
  INFO: Waiting for the first control plane machine managed by cluster-ccm-mrj0nh/cluster-ccm-wzxvrf-kcp to be provisioned
  STEP: Waiting for one control plane node to exist @ 04/29/24 16:44:29.615
  INFO: Waiting for control plane to be ready
  INFO: Waiting for control plane cluster-ccm-mrj0nh/cluster-ccm-wzxvrf-kcp to be ready (implies underlying nodes to be ready as well)
  STEP: Waiting for the control plane to be ready @ 04/29/24 16:45:19.681
  STEP: Checking all the control plane machines are in the expected failure domains @ 04/29/24 16:45:29.692
  INFO: Waiting for the machine deployments to be provisioned
  STEP: Waiting for the workload nodes to exist @ 04/29/24 16:45:29.714
  STEP: Checking all the machines controlled by cluster-ccm-wzxvrf-wmd are in the "<None>" failure domain @ 04/29/24 16:46:19.793
  INFO: Waiting for the machine pools to be provisioned
  STEP: Fetching workload proxy @ 04/29/24 16:46:19.828
  STEP: Checking if nodes have correct CCM labels @ 04/29/24 16:46:19.854
  STEP: PASSED! @ 04/29/24 16:46:20.626
  STEP: Dumping logs from the "cluster-ccm-wzxvrf" workload cluster @ 04/29/24 16:46:20.626
Failed to get logs for Machine cluster-ccm-wzxvrf-kcp-nxsr4, Cluster cluster-ccm-mrj0nh/cluster-ccm-wzxvrf: error creating container exec: Error response from daemon: No such container: cluster-ccm-wzxvrf-kcp-nxsr4
Failed to get logs for Machine cluster-ccm-wzxvrf-wmd-699bd9d48bxdp4sd-fbpbh, Cluster cluster-ccm-mrj0nh/cluster-ccm-wzxvrf: error creating container exec: Error response from daemon: No such container: cluster-ccm-wzxvrf-wmd-699bd9d48bxdp4sd-fbpbh
  STEP: Dumping all the Cluster API resources in the "cluster-ccm-mrj0nh" namespace @ 04/29/24 16:46:20.695
  STEP: Deleting cluster cluster-ccm-mrj0nh/cluster-ccm-wzxvrf @ 04/29/24 16:46:20.824
  STEP: Deleting cluster cluster-ccm-wzxvrf @ 04/29/24 16:46:20.831
  INFO: Waiting for the Cluster cluster-ccm-mrj0nh/cluster-ccm-wzxvrf to be deleted
  STEP: Waiting for cluster cluster-ccm-wzxvrf to be deleted @ 04/29/24 16:46:20.836
  STEP: Deleting namespace used for hosting the "cluster-ccm" test spec @ 04/29/24 16:46:50.862
  INFO: Deleting namespace cluster-ccm-mrj0nh
• [163.607 seconds]
------------------------------
SSSSSSSSSSS
------------------------------
[SynchronizedAfterSuite]
/Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/test/e2e/e2e_suite_test.go:171
  STEP: Dumping logs from the bootstrap cluster @ 04/29/24 16:46:50.879
Failed to get logs for the bootstrap cluster node test-82bozt-control-plane: exit status 1
  STEP: Tearing down the management cluster @ 04/29/24 16:46:50.985
[SynchronizedAfterSuite] PASSED [0.583 seconds]
------------------------------
[ReportAfterSuite] Autogenerated ReportAfterSuite for --junit-report
autogenerated by Ginkgo
[ReportAfterSuite] PASSED [0.004 seconds]
------------------------------

Ran 1 of 32 Specs in 255.042 seconds
SUCCESS! -- 1 Passed | 0 Failed | 0 Pending | 31 Skipped
You're using deprecated Ginkgo functionality:
=============================================
  --ginkgo.always-emit-ginkgo-writer is deprecated  - use -v instead, or one of Ginkgo's machine-readable report formats to get GinkgoWriter output for passing specs.
  --ginkgo.progress is deprecated .  The functionality provided by --progress was confusing and is no longer needed.  Use --show-node-events instead to see node entry and exit events included in the timeline of failed and verbose specs.  Or you can run with -vv to always see all node events.  Lastly, --poll-progress-after and the PollProgressAfter decorator now provide a better mechanism for debugging specs that tend to get stuck.
  --ginkgo.slow-spec-threshold is deprecated --slow-spec-threshold has been deprecated and will be removed in a future version of Ginkgo.  This feature has proved to be more noisy than useful.  You can use --poll-progress-after, instead, to get more actionable feedback about potentially slow specs and understand where they might be getting stuck.

To silence deprecations that can be silenced set the following environment variable:
  ACK_GINKGO_DEPRECATIONS=2.6.0

PASS

Ginkgo ran 1 suite in 4m19.338730875s
Test Suite Passed

@thunderboltsid
Contributor Author

Running the CCM tests locally against Kubernetes v1.28 also passed.

@thunderboltsid
Contributor Author

Running the CCM tests locally with v1.29 also passes, so the earlier CI failure appears to have been a flake.

$ LABEL_FILTERS=ccm make test-e2e-calico
CNI="/Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/test/e2e/data/cni/calico/calico.yaml" /Library/Developer/CommandLineTools/usr/bin/make test-e2e
KO_DOCKER_REPO=ko.local /Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/hack/tools/bin/ko-v0.11.2 build -B --platform=linux/amd64 -t e2e -L .
2024/04/29 16:53:39 Using base gcr.io/distroless/static:nonroot@sha256:e9ac71e2b8e279a8372741b7a0293afda17650d926900233ec3a7b2b7c22a246 for github.com/nutanix-cloud-native/cluster-api-provider-nutanix
2024/04/29 16:53:40 Building github.com/nutanix-cloud-native/cluster-api-provider-nutanix for linux/amd64
2024/04/29 16:53:43 Loading ko.local/cluster-api-provider-nutanix:053bea19de6c0db64ee45e55c3835df111dd8a340e6c2c69394122cdd35123ab
2024/04/29 16:53:44 Loaded ko.local/cluster-api-provider-nutanix:053bea19de6c0db64ee45e55c3835df111dd8a340e6c2c69394122cdd35123ab
2024/04/29 16:53:44 Adding tag e2e
2024/04/29 16:53:44 Added tag e2e
ko.local/cluster-api-provider-nutanix:053bea19de6c0db64ee45e55c3835df111dd8a340e6c2c69394122cdd35123ab
docker tag ko.local/cluster-api-provider-nutanix:e2e ghcr.io/nutanix-cloud-native/cluster-api-provider-nutanix/controller:e2e
/Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/hack/tools/bin/kustomize-v4.5.4 build /Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/test/e2e/data/infrastructure-nutanix/v1beta1/cluster-template --load-restrictor LoadRestrictionsNone > /Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/test/e2e/data/infrastructure-nutanix/v1beta1/cluster-template.yaml
/Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/hack/tools/bin/kustomize-v4.5.4 build /Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/test/e2e/data/infrastructure-nutanix/v1beta1/cluster-template-no-secret --load-restrictor LoadRestrictionsNone > /Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/test/e2e/data/infrastructure-nutanix/v1beta1/cluster-template-no-secret.yaml
/Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/hack/tools/bin/kustomize-v4.5.4 build /Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/test/e2e/data/infrastructure-nutanix/v1beta1/cluster-template-no-nutanix-cluster --load-restrictor LoadRestrictionsNone > /Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/test/e2e/data/infrastructure-nutanix/v1beta1/cluster-template-no-nutanix-cluster.yaml
/Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/hack/tools/bin/kustomize-v4.5.4 build /Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/test/e2e/data/infrastructure-nutanix/v1beta1/cluster-template-additional-categories --load-restrictor LoadRestrictionsNone > /Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/test/e2e/data/infrastructure-nutanix/v1beta1/cluster-template-additional-categories.yaml
/Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/hack/tools/bin/kustomize-v4.5.4 build /Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/test/e2e/data/infrastructure-nutanix/v1beta1/cluster-template-no-nmt --load-restrictor LoadRestrictionsNone > /Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/test/e2e/data/infrastructure-nutanix/v1beta1/cluster-template-no-nmt.yaml
/Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/hack/tools/bin/kustomize-v4.5.4 build /Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/test/e2e/data/infrastructure-nutanix/v1beta1/cluster-template-project --load-restrictor LoadRestrictionsNone > /Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/test/e2e/data/infrastructure-nutanix/v1beta1/cluster-template-project.yaml
/Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/hack/tools/bin/kustomize-v4.5.4 build /Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/test/e2e/data/infrastructure-nutanix/v1beta1/cluster-template-ccm --load-restrictor LoadRestrictionsNone > /Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/test/e2e/data/infrastructure-nutanix/v1beta1/cluster-template-ccm.yaml
/Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/hack/tools/bin/kustomize-v4.5.4 build /Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/test/e2e/data/infrastructure-nutanix/v1beta1/cluster-template-upgrades --load-restrictor LoadRestrictionsNone > /Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/test/e2e/data/infrastructure-nutanix/v1beta1/cluster-template-upgrades.yaml
/Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/hack/tools/bin/kustomize-v4.5.4 build /Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/test/e2e/data/infrastructure-nutanix/v1beta1/cluster-template-md-remediation --load-restrictor LoadRestrictionsNone > /Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/test/e2e/data/infrastructure-nutanix/v1beta1/cluster-template-md-remediation.yaml
/Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/hack/tools/bin/kustomize-v4.5.4 build /Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/test/e2e/data/infrastructure-nutanix/v1beta1/cluster-template-kcp-remediation --load-restrictor LoadRestrictionsNone > /Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/test/e2e/data/infrastructure-nutanix/v1beta1/cluster-template-kcp-remediation.yaml
/Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/hack/tools/bin/kustomize-v4.5.4 build /Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/test/e2e/data/infrastructure-nutanix/v1beta1/cluster-template-kcp-scale-in --load-restrictor LoadRestrictionsNone > /Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/test/e2e/data/infrastructure-nutanix/v1beta1/cluster-template-kcp-scale-in.yaml
/Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/hack/tools/bin/kustomize-v4.5.4 build /Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/test/e2e/data/infrastructure-nutanix/v1beta1/cluster-template-csi --load-restrictor LoadRestrictionsNone > /Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/test/e2e/data/infrastructure-nutanix/v1beta1/cluster-template-csi.yaml
/Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/hack/tools/bin/kustomize-v4.5.4 build /Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/test/e2e/data/infrastructure-nutanix/v1alpha4/cluster-template --load-restrictor LoadRestrictionsNone > /Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/test/e2e/data/infrastructure-nutanix/v1alpha4/cluster-template.yaml
/Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/hack/tools/bin/kustomize-v4.5.4 build templates/base > templates/cluster-template.yaml
/Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/hack/tools/bin/kustomize-v4.5.4 build templates/csi > templates/cluster-template-csi.yaml
/Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/hack/tools/bin/kustomize-v4.5.4 build templates/ccm > templates/cluster-template-ccm.yaml
mkdir -p /Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/_artifacts
NUTANIX_LOG_LEVEL=debug /Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/hack/tools/bin/ginkgo-v2.1.4 -v \
		--trace \
		--progress \
		--tags=e2e \
		--label-filter="!only-for-validation && ccm" \
		--skip=""clusterctl-Upgrade"" \
		--nodes=1 \
		--no-color=false \
		--output-dir="/Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/_artifacts" \
		--junit-report="junit.e2e_suite.1.xml" \
		--timeout="24h" \
		--always-emit-ginkgo-writer \
		 ./test/e2e -- \
		-e2e.artifacts-folder="/Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/_artifacts" \
		-e2e.config="/Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/test/e2e/config/nutanix.yaml" \
		-e2e.skip-resource-cleanup=false \
		-e2e.use-existing-cluster=false
Running Suite: capx-e2e - /Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/test/e2e
========================================================================================================================
Random Seed: 1714402431

Will run 1 of 32 specs
------------------------------
[SynchronizedBeforeSuite]
/Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/test/e2e/e2e_suite_test.go:119
  STEP: Initializing a runtime.Scheme with all the GVK relevant for this test @ 04/29/24 16:53:56.906
  STEP: Loading the e2e test configuration from "/Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/test/e2e/config/nutanix.yaml" @ 04/29/24 16:53:56.906
  STEP: Creating a clusterctl local repository into "/Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/_artifacts" @ 04/29/24 16:53:56.907
  STEP: Reading the ClusterResourceSet manifest /Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/test/e2e/data/cni/calico/calico.yaml @ 04/29/24 16:53:56.907
  STEP: Setting up the bootstrap cluster @ 04/29/24 16:54:09.227
  STEP: Creating the bootstrap cluster @ 04/29/24 16:54:09.227
  INFO: Creating a kind cluster with name "test-bcvm3f"
Creating cluster "test-bcvm3f" ...
 • Ensuring node image (kindest/node:v1.23.6) 🖼  ...
 ✓ Ensuring node image (kindest/node:v1.23.6) 🖼
 • Preparing nodes 📦   ...
 ✓ Preparing nodes 📦
 • Writing configuration 📜  ...
 ✓ Writing configuration 📜
 • Starting control-plane 🕹️  ...
 ✓ Starting control-plane 🕹️
 • Installing CNI 🔌  ...
 ✓ Installing CNI 🔌
 • Installing StorageClass 💾  ...
 ✓ Installing StorageClass 💾
  INFO: The kubeconfig file for the kind cluster is /var/folders/g3/lb827bt96z10xz_m_c2xn93w0000gp/T/e2e-kind3654022335
  INFO: Loading image: "ghcr.io/nutanix-cloud-native/cluster-api-provider-nutanix/controller:e2e"
  INFO: Image ghcr.io/nutanix-cloud-native/cluster-api-provider-nutanix/controller:e2e is present in local container image cache
  INFO: Loading image: "registry.k8s.io/cluster-api/cluster-api-controller:v1.4.1"
  INFO: Image registry.k8s.io/cluster-api/cluster-api-controller:v1.4.1 is present in local container image cache
  INFO: Loading image: "registry.k8s.io/cluster-api/kubeadm-bootstrap-controller:v1.4.1"
  INFO: Image registry.k8s.io/cluster-api/kubeadm-bootstrap-controller:v1.4.1 is present in local container image cache
  INFO: Loading image: "registry.k8s.io/cluster-api/kubeadm-control-plane-controller:v1.4.1"
  INFO: Image registry.k8s.io/cluster-api/kubeadm-control-plane-controller:v1.4.1 is present in local container image cache
  STEP: Initializing the bootstrap cluster @ 04/29/24 16:54:24.573
  INFO: clusterctl init --config /Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/_artifacts/repository/clusterctl-config.yaml --kubeconfig /var/folders/g3/lb827bt96z10xz_m_c2xn93w0000gp/T/e2e-kind3654022335 --core cluster-api --bootstrap kubeadm --control-plane kubeadm --infrastructure nutanix
  INFO: Waiting for provider controllers to be running
  STEP: Waiting for deployment capi-kubeadm-bootstrap-system/capi-kubeadm-bootstrap-controller-manager to be available @ 04/29/24 16:55:21.72
  INFO: Creating log watcher for controller capi-kubeadm-bootstrap-system/capi-kubeadm-bootstrap-controller-manager, pod capi-kubeadm-bootstrap-controller-manager-6d4987b5bf-g487p, container manager
  STEP: Waiting for deployment capi-kubeadm-control-plane-system/capi-kubeadm-control-plane-controller-manager to be available @ 04/29/24 16:55:21.729
  INFO: Creating log watcher for controller capi-kubeadm-control-plane-system/capi-kubeadm-control-plane-controller-manager, pod capi-kubeadm-control-plane-controller-manager-67c7668b94-4lvp7, container manager
  STEP: Waiting for deployment capi-system/capi-controller-manager to be available @ 04/29/24 16:55:21.736
  INFO: Creating log watcher for controller capi-system/capi-controller-manager, pod capi-controller-manager-86464d4bfd-94gr4, container manager
  STEP: Waiting for deployment capx-system/capx-controller-manager to be available @ 04/29/24 16:55:21.745
  INFO: Creating log watcher for controller capx-system/capx-controller-manager, pod capx-controller-manager-84d9fcd6f8-wngqk, container kube-rbac-proxy
  INFO: Creating log watcher for controller capx-system/capx-controller-manager, pod capx-controller-manager-84d9fcd6f8-wngqk, container manager
[SynchronizedBeforeSuite] PASSED [85.237 seconds]
------------------------------
SSSSSSSSS
------------------------------
Nutanix flavor CCM Create a cluster with Nutanix CCM [capx-feature-test, ccm, slow, network]
/Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/test/e2e/ccm_test.go:68
  STEP: Creating a namespace for hosting the "cluster-ccm" test spec @ 04/29/24 16:55:23.342
  INFO: Creating namespace cluster-ccm-wpojsf
  INFO: Creating event watcher for namespace "cluster-ccm-wpojsf"
  STEP: Creating a workload cluster @ 04/29/24 16:55:23.374
  INFO: Creating the workload cluster with name "cluster-ccm-2auaer" using the "ccm" template (Kubernetes v1.29.2, 1 control-plane machines, 1 worker machines)
  INFO: Getting the cluster template yaml
  INFO: clusterctl config cluster cluster-ccm-2auaer --infrastructure (default) --kubernetes-version v1.29.2 --control-plane-machine-count 1 --worker-machine-count 1 --flavor ccm
  INFO: Applying the cluster template yaml to the cluster
configmap/user-ca-bundle created
configmap/cni-cluster-ccm-2auaer-crs-cni created
configmap/nutanix-ccm created
secret/cluster-ccm-2auaer created
secret/nutanix-ccm-secret created
clusterresourceset.addons.cluster.x-k8s.io/cluster-ccm-2auaer-crs-cni created
clusterresourceset.addons.cluster.x-k8s.io/nutanix-ccm-crs created
kubeadmconfigtemplate.bootstrap.cluster.x-k8s.io/cluster-ccm-2auaer-kcfg-0 created
cluster.cluster.x-k8s.io/cluster-ccm-2auaer created
machinedeployment.cluster.x-k8s.io/cluster-ccm-2auaer-wmd created
machinehealthcheck.cluster.x-k8s.io/cluster-ccm-2auaer-mhc created
kubeadmcontrolplane.controlplane.cluster.x-k8s.io/cluster-ccm-2auaer-kcp created
nutanixcluster.infrastructure.cluster.x-k8s.io/cluster-ccm-2auaer created
nutanixmachinetemplate.infrastructure.cluster.x-k8s.io/cluster-ccm-2auaer-mt-0 created

  INFO: Waiting for the cluster infrastructure to be provisioned
  STEP: Waiting for cluster to enter the provisioned phase @ 04/29/24 16:55:24.542
  INFO: Waiting for control plane to be initialized
  INFO: Waiting for the first control plane machine managed by cluster-ccm-wpojsf/cluster-ccm-2auaer-kcp to be provisioned
  STEP: Waiting for one control plane node to exist @ 04/29/24 16:55:44.601
  INFO: Waiting for control plane to be ready
  INFO: Waiting for control plane cluster-ccm-wpojsf/cluster-ccm-2auaer-kcp to be ready (implies underlying nodes to be ready as well)
  STEP: Waiting for the control plane to be ready @ 04/29/24 16:57:04.737
  STEP: Checking all the control plane machines are in the expected failure domains @ 04/29/24 16:57:44.786
  INFO: Waiting for the machine deployments to be provisioned
  STEP: Waiting for the workload nodes to exist @ 04/29/24 16:57:44.812
  STEP: Checking all the machines controlled by cluster-ccm-2auaer-wmd are in the "<None>" failure domain @ 04/29/24 16:57:44.82
  INFO: Waiting for the machine pools to be provisioned
  STEP: Fetching workload proxy @ 04/29/24 16:57:44.852
  STEP: Checking if nodes have correct CCM labels @ 04/29/24 16:57:44.878
  STEP: PASSED! @ 04/29/24 16:57:45.66
  STEP: Dumping logs from the "cluster-ccm-2auaer" workload cluster @ 04/29/24 16:57:45.66
Failed to get logs for Machine cluster-ccm-2auaer-kcp-4x6s2, Cluster cluster-ccm-wpojsf/cluster-ccm-2auaer: error creating container exec: Error response from daemon: No such container: cluster-ccm-2auaer-kcp-4x6s2
Failed to get logs for Machine cluster-ccm-2auaer-wmd-54666f4d97xldqfp-m9g4g, Cluster cluster-ccm-wpojsf/cluster-ccm-2auaer: error creating container exec: Error response from daemon: No such container: cluster-ccm-2auaer-wmd-54666f4d97xldqfp-m9g4g
  STEP: Dumping all the Cluster API resources in the "cluster-ccm-wpojsf" namespace @ 04/29/24 16:57:45.721
  STEP: Deleting cluster cluster-ccm-wpojsf/cluster-ccm-2auaer @ 04/29/24 16:57:45.847
  STEP: Deleting cluster cluster-ccm-2auaer @ 04/29/24 16:57:45.854
  INFO: Waiting for the Cluster cluster-ccm-wpojsf/cluster-ccm-2auaer to be deleted
  STEP: Waiting for cluster cluster-ccm-2auaer to be deleted @ 04/29/24 16:57:45.859
  STEP: Deleting namespace used for hosting the "cluster-ccm" test spec @ 04/29/24 16:58:35.903
  INFO: Deleting namespace cluster-ccm-wpojsf
• [193.795 seconds]
------------------------------
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[SynchronizedAfterSuite]
/Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/test/e2e/e2e_suite_test.go:171
  STEP: Dumping logs from the bootstrap cluster @ 04/29/24 16:58:35.938
Failed to get logs for the bootstrap cluster node test-bcvm3f-control-plane: exit status 1
  STEP: Tearing down the management cluster @ 04/29/24 16:58:36.049
[SynchronizedAfterSuite] PASSED [0.580 seconds]
------------------------------
[ReportAfterSuite] Autogenerated ReportAfterSuite for --junit-report
autogenerated by Ginkgo
[ReportAfterSuite] PASSED [0.003 seconds]
------------------------------

Ran 1 of 32 Specs in 279.612 seconds
SUCCESS! -- 1 Passed | 0 Failed | 0 Pending | 31 Skipped
You're using deprecated Ginkgo functionality:
=============================================
  --ginkgo.always-emit-ginkgo-writer is deprecated  - use -v instead, or one of Ginkgo's machine-readable report formats to get GinkgoWriter output for passing specs.
  --ginkgo.progress is deprecated .  The functionality provided by --progress was confusing and is no longer needed.  Use --show-node-events instead to see node entry and exit events included in the timeline of failed and verbose specs.  Or you can run with -vv to always see all node events.  Lastly, --poll-progress-after and the PollProgressAfter decorator now provide a better mechanism for debugging specs that tend to get stuck.
  --ginkgo.slow-spec-threshold is deprecated --slow-spec-threshold has been deprecated and will be removed in a future version of Ginkgo.  This feature has proved to be more noisy than useful.  You can use --poll-progress-after, instead, to get more actionable feedback about potentially slow specs and understand where they might be getting stuck.

To silence deprecations that can be silenced set the following environment variable:
  ACK_GINKGO_DEPRECATIONS=2.6.0

PASS

Ginkgo ran 1 suite in 4m44.790780625s
Test Suite Passed

Run `make manifests`
@thunderboltsid
Contributor Author

/test e2e-nutanix-features

1 similar comment
@thunderboltsid
Contributor Author

/test e2e-nutanix-features


✅ None of your dependencies violate policy!

@thunderboltsid
Contributor Author

/test e2e-nutanix-features
/test e2e-k8s-upgrade

@thunderboltsid
Contributor Author

/test e2e-k8s-upgrade

1 similar comment
@thunderboltsid
Contributor Author

/test e2e-k8s-upgrade

@nutanix-cn-prow-bot

@thunderboltsid: The following tests failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests:

Test name Commit Details Required Rerun command
ci/prow/e2e-capx-controller-upgrade 8b211ed link false /test e2e-capx-controller-upgrade
ci/prow/e2e-k8s-upgrade a2e74d2 link false /test e2e-k8s-upgrade

Full PR test history. Your PR dashboard.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.

@thunderboltsid
Contributor Author

Running the upgrade tests locally, they passed for v1.27 -> v1.28 but failed for v1.28 -> v1.29 with the same error:

• [FAILED] [1391.484 seconds]
When upgrading a workload cluster with a single control plane machine [It] Should create and upgrade a workload cluster and eventually run kubetest [cluster-upgrade-conformance, slow, network]
/Users/sid.shukla/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.5/e2e/cluster_upgrade.go:118

  [FAILED] Timed out after 1200.001s.
  Timed out waiting for all control-plane machines in Cluster k8s-upgrade-and-conformance-you1z0/k8s-upgrade-and-conformance-dakzon to be upgraded to kubernetes version v1.29.2
  Error: function returned error: old nodes remain
      <*fmt.wrapError | 0x140015563c0>: {
          msg: "function returned error: old nodes remain",
          err: <*errors.fundamental | 0x1400144ab10>{
              msg: "old nodes remain",
              stack: [0x1019de80d, 0x1007f8bd0, 0x1007f8064, 0x10182a6e4, 0x10182b428, 0x10182939c, 0x1019de544, 0x1019d6a88, 0x101ea5068, 0x100b2fe24, 0x100b3fad8, 0x100789c84],
          },
      }
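
To see which machines were stuck on the old version, something like the following against the management cluster helps (a sketch; the namespace and cluster names come from the failure above, and the kubeconfig is assumed to point at the bootstrap/management cluster):

$ kubectl get kubeadmcontrolplane -n k8s-upgrade-and-conformance-you1z0
$ kubectl get machines -n k8s-upgrade-and-conformance-you1z0 \
    -o custom-columns=NAME:.metadata.name,VERSION:.spec.version,PHASE:.status.phase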

The new control-plane node that came up never reached a Ready state. Looking at the kubelet logs:

$ journalctl -u kubelet
Hint: You are currently not seeing messages from other users and the system.
      Users in groups 'adm', 'systemd-journal' can see all messages.
      Pass -q to turn off this notice.
-- No entries --
$ sudo journalctl -u kubelet
Apr 30 10:17:08 k8s-upgrade-and-conformance-dakzon-kcp-w42xl systemd[1]: Started kubelet: The Kubernetes Node Agent.
Apr 30 10:17:09 k8s-upgrade-and-conformance-dakzon-kcp-w42xl kubelet[752]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Apr 30 10:17:09 k8s-upgrade-and-conformance-dakzon-kcp-w42xl kubelet[752]: I0430 10:17:09.210385     752 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Apr 30 10:17:09 k8s-upgrade-and-conformance-dakzon-kcp-w42xl kubelet[752]: E0430 10:17:09.210549     752 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml>
Apr 30 10:17:09 k8s-upgrade-and-conformance-dakzon-kcp-w42xl systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 30 10:17:09 k8s-upgrade-and-conformance-dakzon-kcp-w42xl systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 30 10:17:19 k8s-upgrade-and-conformance-dakzon-kcp-w42xl systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Apr 30 10:17:19 k8s-upgrade-and-conformance-dakzon-kcp-w42xl systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
Apr 30 10:17:19 k8s-upgrade-and-conformance-dakzon-kcp-w42xl systemd[1]: Started kubelet: The Kubernetes Node Agent.
Apr 30 10:17:19 k8s-upgrade-and-conformance-dakzon-kcp-w42xl kubelet[1112]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Apr 30 10:17:19 k8s-upgrade-and-conformance-dakzon-kcp-w42xl kubelet[1112]: I0430 10:17:19.317434    1112 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Apr 30 10:17:19 k8s-upgrade-and-conformance-dakzon-kcp-w42xl kubelet[1112]: E0430 10:17:19.317580    1112 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yam>
Apr 30 10:17:19 k8s-upgrade-and-conformance-dakzon-kcp-w42xl systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 30 10:17:19 k8s-upgrade-and-conformance-dakzon-kcp-w42xl systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 30 10:17:22 k8s-upgrade-and-conformance-dakzon-kcp-w42xl systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
Apr 30 10:17:23 k8s-upgrade-and-conformance-dakzon-kcp-w42xl systemd[1]: Started kubelet: The Kubernetes Node Agent
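
The failure points at a missing or unreadable /var/lib/kubelet/config.yaml, so a quick sanity check on the affected node looks like this (a sketch, assuming SSH access to the control-plane VM):

$ sudo ls -l /var/lib/kubelet/config.yaml
$ sudo journalctl -u kubelet --no-pager | tail -n 20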

@thunderboltsid
Contributor Author

Closing this PR for now. For 1.2.x we will focus on testing with Kubernetes v1.28.

@thunderboltsid thunderboltsid deleted the jira/krbn-8158 branch May 3, 2024 16:50