
fix(controller): Use v3 VM VG Reference for v4 detach VG From VM #473

Merged: 1 commit into main from jira/NCN-101958 on Aug 14, 2024

Conversation

@thunderboltsid (Contributor) commented Aug 13, 2024

AOS 6.5 does not support the v4 VMM APIs, but it does support the v4 volume group APIs. To stay compatible across AOS versions, we no longer use the v4 GetVM call to obtain the extIds for the VM and VG operations; instead, we use the UUIDs returned by the v3 GetVM call.
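
For illustration, here is a minimal sketch of the approach. This is not the exact code from this PR: the helper name `detachVGFromVM` is a hypothetical stand-in, and the field access paths are assumptions based on the prism-go-client v3 types.

```go
package controllers

import (
	"context"
	"fmt"

	prismclientv3 "github.com/nutanix-cloud-native/prism-go-client/v3"
)

// detachVolumeGroups resolves the VM UUID and its volume group UUIDs from a
// single v3 GetVM call, then detaches each VG via the v4 volume group API.
// No v4 VMM endpoint (unsupported on AOS 6.5) is needed.
func detachVolumeGroups(ctx context.Context, client *prismclientv3.Client, vmUUID string) error {
	vm, err := client.V3.GetVM(ctx, vmUUID)
	if err != nil {
		return fmt.Errorf("failed to get VM %s via v3 API: %w", vmUUID, err)
	}
	for _, disk := range vm.Spec.Resources.DiskList {
		// Disks backed by a volume group carry a volume_group reference;
		// its UUID doubles as the extId expected by the v4 VG API.
		if disk.VolumeGroupReference == nil || disk.VolumeGroupReference.UUID == nil {
			continue
		}
		if err := detachVGFromVM(ctx, *disk.VolumeGroupReference.UUID, vmUUID); err != nil {
			return fmt.Errorf("failed to detach VG %s from VM %s: %w",
				*disk.VolumeGroupReference.UUID, vmUUID, err)
		}
	}
	return nil
}

// detachVGFromVM is a hypothetical wrapper around the v4 volume group
// "detach VM" endpoint; the real PR calls the v4 volumes client here.
func detachVGFromVM(ctx context.Context, vgUUID, vmUUID string) error {
	// ... invoke the v4 volume groups API with vgUUID as the extId ...
	return nil
}
```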

How has this been tested?
Ran the CSI 3.0 E2E tests on an AOS 6.5 cluster:

$ LABEL_FILTERS="csi3" make test-e2e-calico
CNI="/Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/test/e2e/data/cni/calico/calico.yaml" GIT_COMMIT="fdad9efe9e90ae99841ccfab3ecceb404c405fde" make test-e2e
make[1]: Entering directory '/Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix'
echo "Git commit hash: fdad9efe9e90ae99841ccfab3ecceb404c405fde"
Git commit hash: fdad9efe9e90ae99841ccfab3ecceb404c405fde
KO_DOCKER_REPO=ko.local GOFLAGS="-ldflags=-X=main.gitCommitHash=fdad9efe9e90ae99841ccfab3ecceb404c405fde" ko build -B --platform=linux/amd64 -t e2e-fdad9efe9e90ae99841ccfab3ecceb404c405fde .
2024/08/14 02:55:29 Using base cgr.dev/chainguard/static:latest@sha256:5e9c88174a28c259c349f308dd661a6ec61ed5f8c72ecfaefb46cceb811b55a1 for github.com/nutanix-cloud-native/cluster-api-provider-nutanix
2024/08/14 02:55:30 Building github.com/nutanix-cloud-native/cluster-api-provider-nutanix for linux/amd64
2024/08/14 02:55:33 Loading ko.local/cluster-api-provider-nutanix:0f670d7f05aaf2a995acc3b5cccdd78cdeb579a3169ebecdff62c104e2c17ade
2024/08/14 02:55:36 Loaded ko.local/cluster-api-provider-nutanix:0f670d7f05aaf2a995acc3b5cccdd78cdeb579a3169ebecdff62c104e2c17ade
2024/08/14 02:55:36 Adding tag e2e-fdad9efe9e90ae99841ccfab3ecceb404c405fde
2024/08/14 02:55:36 Added tag e2e-fdad9efe9e90ae99841ccfab3ecceb404c405fde
ko.local/cluster-api-provider-nutanix:0f670d7f05aaf2a995acc3b5cccdd78cdeb579a3169ebecdff62c104e2c17ade
docker tag ko.local/cluster-api-provider-nutanix:e2e-fdad9efe9e90ae99841ccfab3ecceb404c405fde harbor.eng.nutanix.com/ncn-ci/cluster-api-provider-nutanix:e2e-fdad9efe9e90ae99841ccfab3ecceb404c405fde
kustomize build /Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/test/e2e/data/infrastructure-nutanix/v1beta1/cluster-template --load-restrictor LoadRestrictionsNone > /Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/test/e2e/data/infrastructure-nutanix/v1beta1/cluster-template.yaml
kustomize build /Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/test/e2e/data/infrastructure-nutanix/v1beta1/cluster-template-no-secret --load-restrictor LoadRestrictionsNone > /Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/test/e2e/data/infrastructure-nutanix/v1beta1/cluster-template-no-secret.yaml
kustomize build /Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/test/e2e/data/infrastructure-nutanix/v1beta1/cluster-template-no-nutanix-cluster --load-restrictor LoadRestrictionsNone > /Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/test/e2e/data/infrastructure-nutanix/v1beta1/cluster-template-no-nutanix-cluster.yaml
kustomize build /Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/test/e2e/data/infrastructure-nutanix/v1beta1/cluster-template-additional-categories --load-restrictor LoadRestrictionsNone > /Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/test/e2e/data/infrastructure-nutanix/v1beta1/cluster-template-additional-categories.yaml
kustomize build /Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/test/e2e/data/infrastructure-nutanix/v1beta1/cluster-template-no-nmt --load-restrictor LoadRestrictionsNone > /Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/test/e2e/data/infrastructure-nutanix/v1beta1/cluster-template-no-nmt.yaml
kustomize build /Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/test/e2e/data/infrastructure-nutanix/v1beta1/cluster-template-project --load-restrictor LoadRestrictionsNone > /Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/test/e2e/data/infrastructure-nutanix/v1beta1/cluster-template-project.yaml
kustomize build /Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/test/e2e/data/infrastructure-nutanix/v1beta1/cluster-template-upgrades --load-restrictor LoadRestrictionsNone > /Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/test/e2e/data/infrastructure-nutanix/v1beta1/cluster-template-upgrades.yaml
kustomize build /Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/test/e2e/data/infrastructure-nutanix/v1beta1/cluster-template-md-remediation --load-restrictor LoadRestrictionsNone > /Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/test/e2e/data/infrastructure-nutanix/v1beta1/cluster-template-md-remediation.yaml
kustomize build /Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/test/e2e/data/infrastructure-nutanix/v1beta1/cluster-template-kcp-remediation --load-restrictor LoadRestrictionsNone > /Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/test/e2e/data/infrastructure-nutanix/v1beta1/cluster-template-kcp-remediation.yaml
kustomize build /Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/test/e2e/data/infrastructure-nutanix/v1beta1/cluster-template-kcp-scale-in --load-restrictor LoadRestrictionsNone > /Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/test/e2e/data/infrastructure-nutanix/v1beta1/cluster-template-kcp-scale-in.yaml
kustomize build /Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/test/e2e/data/infrastructure-nutanix/v1beta1/cluster-template-csi --load-restrictor LoadRestrictionsNone > /Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/test/e2e/data/infrastructure-nutanix/v1beta1/cluster-template-csi.yaml
kustomize build /Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/test/e2e/data/infrastructure-nutanix/v1beta1/cluster-template-csi3 --load-restrictor LoadRestrictionsNone > /Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/test/e2e/data/infrastructure-nutanix/v1beta1/cluster-template-csi3.yaml
kustomize build /Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/test/e2e/data/infrastructure-nutanix/v1beta1/cluster-template-failure-domains --load-restrictor LoadRestrictionsNone > /Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/test/e2e/data/infrastructure-nutanix/v1beta1/cluster-template-failure-domains.yaml
kustomize build /Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/test/e2e/data/infrastructure-nutanix/v1beta1/cluster-template-clusterclass --load-restrictor LoadRestrictionsNone > /Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/test/e2e/data/infrastructure-nutanix/v1beta1/cluster-template-clusterclass.yaml
kustomize build /Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/test/e2e/data/infrastructure-nutanix/v1beta1/cluster-template-clusterclass --load-restrictor LoadRestrictionsNone > /Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/test/e2e/data/infrastructure-nutanix/v1beta1/clusterclass-nutanix-quick-start.yaml
kustomize build /Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/test/e2e/data/infrastructure-nutanix/v1beta1/cluster-template-topology --load-restrictor LoadRestrictionsNone > /Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/test/e2e/data/infrastructure-nutanix/v1beta1/cluster-template-topology.yaml
kustomize build /Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/test/e2e/data/infrastructure-nutanix/v1.3.5/cluster-template --load-restrictor LoadRestrictionsNone > /Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/test/e2e/data/infrastructure-nutanix/v1.3.5/cluster-template.yaml
kustomize build templates/base > templates/cluster-template.yaml
kustomize build templates/csi > templates/cluster-template-csi.yaml
kustomize build templates/csi3 > templates/cluster-template-csi3.yaml
kustomize build templates/clusterclass > templates/cluster-template-clusterclass.yaml
kustomize build templates/topology > templates/cluster-template-topology.yaml
echo "Image tag for E2E test is e2e-fdad9efe9e90ae99841ccfab3ecceb404c405fde"
Image tag for E2E test is e2e-fdad9efe9e90ae99841ccfab3ecceb404c405fde
LOCAL_PROVIDER_VERSION=v1.5.99 \
	MANAGER_IMAGE=harbor.eng.nutanix.com/ncn-ci/cluster-api-provider-nutanix:e2e-fdad9efe9e90ae99841ccfab3ecceb404c405fde \
	envsubst < /Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/test/e2e/config/nutanix.yaml > /Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/test/e2e/config/nutanix.yaml.tmp
docker tag ko.local/cluster-api-provider-nutanix:e2e-fdad9efe9e90ae99841ccfab3ecceb404c405fde harbor.eng.nutanix.com/ncn-ci/cluster-api-provider-nutanix:e2e-fdad9efe9e90ae99841ccfab3ecceb404c405fde
docker push harbor.eng.nutanix.com/ncn-ci/cluster-api-provider-nutanix:e2e-fdad9efe9e90ae99841ccfab3ecceb404c405fde
The push refers to repository [harbor.eng.nutanix.com/ncn-ci/cluster-api-provider-nutanix]
d15f13965f14: Pushed
ffe56a1c5f38: Layer already exists
935a6850a620: Layer already exists
e2e-fdad9efe9e90ae99841ccfab3ecceb404c405fde: digest: sha256:c58d25bbed29425fbd90b1b4b773a75c229724da37d5b101e9f2bb61fd05fe0c size: 946
mkdir -p /Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/_artifacts
NUTANIX_LOG_LEVEL=debug ginkgo -v \
	--trace \
	--tags=e2e \
	--label-filter="!only-for-validation && csi30" \
	--skip="" \
	--focus="" \
	--nodes=1 \
	--no-color=false \
	--output-dir="/Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/_artifacts" \
	--junit-report="junit.e2e_suite.1.xml" \
	--timeout="24h" \
	 \
	./test/e2e -- \
	-e2e.artifacts-folder="/Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/_artifacts" \
	-e2e.config="/Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/test/e2e/config/nutanix.yaml.tmp" \
	-e2e.skip-resource-cleanup=false \
	-e2e.use-existing-cluster=false
Running Suite: capx-e2e - /Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/test/e2e
========================================================================================================================
Random Seed: 1723596956

Will run 1 of 101 specs
------------------------------
[SynchronizedBeforeSuite]
/Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/test/e2e/e2e_suite_test.go:63
  STEP: Initializing a runtime.Scheme with all the GVK relevant for this test @ 08/14/24 02:56:00.752
  STEP: Loading the e2e test configuration from "/Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/test/e2e/config/nutanix.yaml.tmp" @ 08/14/24 02:56:00.753
  STEP: Creating a clusterctl local repository into "/Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/_artifacts" @ 08/14/24 02:56:00.754
  STEP: Reading the ClusterResourceSet manifest /Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/test/e2e/data/cni/calico/calico.yaml @ 08/14/24 02:56:00.754
  STEP: Setting up the bootstrap cluster @ 08/14/24 02:56:08.515
  STEP: Creating the bootstrap cluster @ 08/14/24 02:56:08.515
  INFO: Creating a kind cluster with name "test-m8t06e"
Creating cluster "test-m8t06e" ...
 • Ensuring node image (kindest/node:v1.30.2) 🖼  ...
 ✓ Ensuring node image (kindest/node:v1.30.2) 🖼
 • Preparing nodes 📦   ...
 ✓ Preparing nodes 📦
 • Writing configuration 📜  ...
 ✓ Writing configuration 📜
 • Starting control-plane 🕹️  ...
 ✓ Starting control-plane 🕹️
 • Installing CNI 🔌  ...
 ✓ Installing CNI 🔌
 • Installing StorageClass 💾  ...
 ✓ Installing StorageClass 💾
  INFO: The kubeconfig file for the kind cluster is /var/folders/g3/lb827bt96z10xz_m_c2xn93w0000gp/T/e2e-kind810191237
  INFO: Loading image: "harbor.eng.nutanix.com/ncn-ci/cluster-api-provider-nutanix:e2e-fdad9efe9e90ae99841ccfab3ecceb404c405fde"
  INFO: Image harbor.eng.nutanix.com/ncn-ci/cluster-api-provider-nutanix:e2e-fdad9efe9e90ae99841ccfab3ecceb404c405fde is present in local container image cache
  INFO: Loading image: "registry.k8s.io/cluster-api/cluster-api-controller:v1.6.2"
  INFO: Image registry.k8s.io/cluster-api/cluster-api-controller:v1.6.2 is present in local container image cache
  INFO: Loading image: "registry.k8s.io/cluster-api/kubeadm-bootstrap-controller:v1.6.2"
  INFO: Image registry.k8s.io/cluster-api/kubeadm-bootstrap-controller:v1.6.2 is present in local container image cache
  INFO: Loading image: "registry.k8s.io/cluster-api/kubeadm-control-plane-controller:v1.6.2"
  INFO: Image registry.k8s.io/cluster-api/kubeadm-control-plane-controller:v1.6.2 is present in local container image cache
  INFO: Loading image: "registry.k8s.io/cluster-api/cluster-api-controller:v1.7.3"
  INFO: Image registry.k8s.io/cluster-api/cluster-api-controller:v1.7.3 is present in local container image cache
  INFO: Loading image: "registry.k8s.io/cluster-api/kubeadm-bootstrap-controller:v1.7.3"
  INFO: Image registry.k8s.io/cluster-api/kubeadm-bootstrap-controller:v1.7.3 is present in local container image cache
  INFO: Loading image: "registry.k8s.io/cluster-api/kubeadm-control-plane-controller:v1.7.3"
  INFO: Image registry.k8s.io/cluster-api/kubeadm-control-plane-controller:v1.7.3 is present in local container image cache
  STEP: Initializing the bootstrap cluster @ 08/14/24 02:56:26.814
  INFO: clusterctl init --config /Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/_artifacts/repository/clusterctl-config.yaml --kubeconfig /var/folders/g3/lb827bt96z10xz_m_c2xn93w0000gp/T/e2e-kind810191237 --wait-providers --core cluster-api --bootstrap kubeadm --control-plane kubeadm --infrastructure nutanix
  INFO: Waiting for provider controllers to be running
  STEP: Waiting for deployment capi-kubeadm-bootstrap-system/capi-kubeadm-bootstrap-controller-manager to be available @ 08/14/24 02:57:05.106
  INFO: Creating log watcher for controller capi-kubeadm-bootstrap-system/capi-kubeadm-bootstrap-controller-manager, pod capi-kubeadm-bootstrap-controller-manager-659b5fb778-sq27l, container manager
  STEP: Waiting for deployment capi-kubeadm-control-plane-system/capi-kubeadm-control-plane-controller-manager to be available @ 08/14/24 02:57:05.236
  INFO: Creating log watcher for controller capi-kubeadm-control-plane-system/capi-kubeadm-control-plane-controller-manager, pod capi-kubeadm-control-plane-controller-manager-696879d594-nwf2b, container manager
  STEP: Waiting for deployment capi-system/capi-controller-manager to be available @ 08/14/24 02:57:05.245
  INFO: Creating log watcher for controller capi-system/capi-controller-manager, pod capi-controller-manager-6f69847fd8-srgg6, container manager
  STEP: Waiting for deployment capx-system/capx-controller-manager to be available @ 08/14/24 02:57:05.255
  INFO: Creating log watcher for controller capx-system/capx-controller-manager, pod capx-controller-manager-84f4ffbcdd-zqgqn, container kube-rbac-proxy
  INFO: Creating log watcher for controller capx-system/capx-controller-manager, pod capx-controller-manager-84f4ffbcdd-zqgqn, container manager
[SynchronizedBeforeSuite] PASSED [64.768 seconds]
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
Nutanix flavor CSI Create a cluster with Nutanix CSI 3.0 and use Nutanix Volumes to create PV and delete cluster without deleting PVC [capx-feature-test, csi, csi30]
/Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/test/e2e/csi_test.go:384
  STEP: Creating a namespace for hosting the "cluster-csi" test spec @ 08/14/24 02:57:05.522
  INFO: Creating namespace cluster-csi-uvsvw9
  INFO: Creating event watcher for namespace "cluster-csi-uvsvw9"
  STEP: Creating a workload cluster @ 08/14/24 02:57:05.538
  INFO: Creating the workload cluster with name "cluster-csi-pezvyh" using the "csi3" template (Kubernetes v1.30.3, 1 control-plane machines, 1 worker machines)
  INFO: Getting the cluster template yaml
  INFO: clusterctl config cluster cluster-csi-pezvyh --infrastructure (default) --kubernetes-version v1.30.3 --control-plane-machine-count 1 --worker-machine-count 1 --flavor csi3
  INFO: Creating the workload cluster with name "cluster-csi-pezvyh" from the provided yaml
  INFO: Applying the cluster template yaml of cluster cluster-csi-uvsvw9/cluster-csi-pezvyh
Running kubectl apply --kubeconfig /var/folders/g3/lb827bt96z10xz_m_c2xn93w0000gp/T/e2e-kind810191237 -f -
stdout:
configmap/cluster-csi-pezvyh-pc-trusted-ca-bundle created
configmap/nutanix-ccm created
configmap/nutanix-csi created
configmap/cni-cluster-csi-pezvyh-crs-cni created
secret/cluster-csi-pezvyh created
secret/nutanix-ccm-secret created
secret/nutanix-csi-secret created
clusterresourceset.addons.cluster.x-k8s.io/nutanix-ccm-crs created
clusterresourceset.addons.cluster.x-k8s.io/nutanix-csi-crs created
clusterresourceset.addons.cluster.x-k8s.io/cluster-csi-pezvyh-crs-cni created
kubeadmconfigtemplate.bootstrap.cluster.x-k8s.io/cluster-csi-pezvyh-kcfg-0 created
cluster.cluster.x-k8s.io/cluster-csi-pezvyh created
machinedeployment.cluster.x-k8s.io/cluster-csi-pezvyh-wmd created
machinehealthcheck.cluster.x-k8s.io/cluster-csi-pezvyh-mhc created
kubeadmcontrolplane.controlplane.cluster.x-k8s.io/cluster-csi-pezvyh-kcp created
nutanixcluster.infrastructure.cluster.x-k8s.io/cluster-csi-pezvyh created
nutanixmachinetemplate.infrastructure.cluster.x-k8s.io/cluster-csi-pezvyh-mt-0 created

  INFO: Waiting for the cluster infrastructure of cluster cluster-csi-uvsvw9/cluster-csi-pezvyh to be provisioned
  STEP: Waiting for cluster to enter the provisioned phase @ 08/14/24 02:57:06.288
  INFO: Waiting for control plane of cluster cluster-csi-uvsvw9/cluster-csi-pezvyh to be initialized
  INFO: Waiting for the first control plane machine managed by cluster-csi-uvsvw9/cluster-csi-pezvyh-kcp to be provisioned
  STEP: Waiting for one control plane node to exist @ 08/14/24 02:57:16.31
  INFO: Waiting for control plane of cluster cluster-csi-uvsvw9/cluster-csi-pezvyh to be ready
  INFO: Waiting for control plane cluster-csi-uvsvw9/cluster-csi-pezvyh-kcp to be ready (implies underlying nodes to be ready as well)
  STEP: Waiting for the control plane to be ready @ 08/14/24 02:58:56.404
  STEP: Checking all the control plane machines are in the expected failure domains @ 08/14/24 02:59:06.421
  INFO: Waiting for the machine deployments of cluster cluster-csi-uvsvw9/cluster-csi-pezvyh to be provisioned
  STEP: Waiting for the workload nodes to exist @ 08/14/24 02:59:06.433
  STEP: Checking all the machines controlled by cluster-csi-pezvyh-wmd are in the "<None>" failure domain @ 08/14/24 02:59:06.438
  INFO: Waiting for the machine pools of cluster cluster-csi-uvsvw9/cluster-csi-pezvyh to be provisioned
  STEP: Fetching workload client @ 08/14/24 02:59:06.445
  STEP: Checking if CSI namespace exists @ 08/14/24 02:59:06.452
  STEP: Checking if CSI deployment exists @ 08/14/24 02:59:07.225
  STEP: Creating CSI Storage class @ 08/14/24 02:59:07.657
  STEP: Creating CSI PVC @ 08/14/24 02:59:08.038
  STEP: Creating a pod using the PVC to ensure VG is attached to the VM @ 08/14/24 02:59:08.235
  STEP: Checking if the pod is running @ 08/14/24 02:59:08.433
  STEP: Checking CSI PVC status is bound @ 08/14/24 02:59:08.433
  STEP: Dumping logs from the "cluster-csi-pezvyh" workload cluster @ 08/14/24 03:01:10.931
Failed to get logs for Machine cluster-csi-pezvyh-kcp-wb57f, Cluster cluster-csi-uvsvw9/cluster-csi-pezvyh: [error creating container exec: Error response from daemon: No such container: cluster-csi-pezvyh-kcp-wb57f, : error creating container exec: Error response from daemon: No such container: cluster-csi-pezvyh-kcp-wb57f]
Failed to get logs for Machine cluster-csi-pezvyh-wmd-8nzbd-ljpl5, Cluster cluster-csi-uvsvw9/cluster-csi-pezvyh: [error creating container exec: Error response from daemon: No such container: cluster-csi-pezvyh-wmd-8nzbd-ljpl5, : error creating container exec: Error response from daemon: No such container: cluster-csi-pezvyh-wmd-8nzbd-ljpl5]
Failed to get infrastructure logs for Cluster cluster-csi-uvsvw9/cluster-csi-pezvyh: failed to inspect container "cluster-csi-pezvyh-lb": Error response from daemon: No such container: cluster-csi-pezvyh-lb
  STEP: Dumping all the Cluster API resources in the "cluster-csi-uvsvw9" namespace @ 08/14/24 03:01:10.986
  STEP: Deleting cluster cluster-csi-uvsvw9/cluster-csi-pezvyh @ 08/14/24 03:01:11.107
  STEP: Deleting cluster cluster-csi-uvsvw9/cluster-csi-pezvyh @ 08/14/24 03:01:11.11
  INFO: Waiting for the Cluster cluster-csi-uvsvw9/cluster-csi-pezvyh to be deleted
  STEP: Waiting for cluster cluster-csi-uvsvw9/cluster-csi-pezvyh to be deleted @ 08/14/24 03:01:11.115
  STEP: Deleting namespace used for hosting the "cluster-csi" test spec @ 08/14/24 03:02:11.164
  INFO: Deleting namespace cluster-csi-uvsvw9
• [305.653 seconds]
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[SynchronizedAfterSuite]
/Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/test/e2e/e2e_suite_test.go:118
  STEP: Dumping logs from the bootstrap cluster @ 08/14/24 03:02:11.175
  STEP: Tearing down the management cluster @ 08/14/24 03:02:11.338
[SynchronizedAfterSuite] PASSED [0.538 seconds]
------------------------------
[ReportAfterSuite] Autogenerated ReportAfterSuite for --junit-report
autogenerated by Ginkgo
[ReportAfterSuite] PASSED [0.002 seconds]
------------------------------

Ran 1 of 101 Specs in 370.961 seconds
SUCCESS! -- 1 Passed | 0 Failed | 0 Pending | 100 Skipped
PASS

Ginkgo ran 1 suite in 6m15.645496583s
Test Suite Passed
make[1]: Leaving directory '/Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix'


codecov bot commented Aug 13, 2024

Codecov Report

Attention: Patch coverage is 0% with 8 lines in your changes missing coverage. Please review.

Project coverage is 31.31%. Comparing base (237138a) to head (14a199f).
Report is 1 commit behind head on main.

Files                                      Patch %   Lines
controllers/helpers.go                     0.00%     5 Missing ⚠️
controllers/nutanixmachine_controller.go   0.00%     3 Missing ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##             main     #473      +/-   ##
==========================================
+ Coverage   31.14%   31.31%   +0.16%     
==========================================
  Files          14       14              
  Lines        1477     1469       -8     
==========================================
  Hits          460      460              
+ Misses       1017     1009       -8     


AOS 6.5 does not support v4 VMM APIs but does support v4 volume group APIs.
@thunderboltsid thunderboltsid requested review from dkoshkin, deepakm-ntnx and jimmidyson and removed request for tuxtof August 14, 2024 02:27
@jimmidyson (Member) left a comment


This is yet another reason we need the adapter client in NCN prism-go-client...

@thunderboltsid thunderboltsid changed the title fix(controller): Use v3 VM VG Reference for v4 detachVGFromVM fix(controller): Use v3 VM VG Reference for v4 detach VG From VM Aug 14, 2024
@thunderboltsid (Contributor, Author) commented:

> This is yet another reason we need the adapter client in NCN prism-go-client...

Absolutely
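
For context, here is a minimal sketch of what such an adapter could look like. This is hypothetical and not an existing prism-go-client API; the `supportsV4VG` probe and all names are assumptions.

```go
package adapter

import "context"

// VolumeGroupDetacher abstracts "detach a volume group from a VM"
// independently of which Prism API version implements it.
type VolumeGroupDetacher interface {
	DetachVolumeGroup(ctx context.Context, vgUUID, vmUUID string) error
}

// NewVolumeGroupDetacher picks the newest implementation the cluster
// supports: a v4-backed one where the v4 volume group APIs are available
// (e.g. AOS 6.5+), else a v3-backed fallback. How supportsV4VG is probed
// from the cluster is left open here.
func NewVolumeGroupDetacher(supportsV4VG bool, v3Impl, v4Impl VolumeGroupDetacher) VolumeGroupDetacher {
	if supportsV4VG {
		return v4Impl
	}
	return v3Impl
}
```

Callers would then depend only on the interface, so version-selection logic like the workaround in this PR lives in one place instead of in each controller.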

@thunderboltsid thunderboltsid merged commit d1217b9 into main Aug 14, 2024
8 of 11 checks passed
@thunderboltsid thunderboltsid deleted the jira/NCN-101958 branch August 14, 2024 16:27
Labels: bug (Something isn't working)