e2e: retry when a "transport is closing" error is hit #2615

Merged (1 commit) on Nov 17, 2021

Conversation

@nixpanic (Member) commented Nov 3, 2021

There have been occasional CI job failures due to "transport is closing"
errors. Adding this error to the isRetryableAPIError() function should
make sure to retry the request until the connection is restored.

Fixes: #2613
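
For reference, a minimal sketch of what the change amounts to, assuming the helper matches the gRPC failure by its message text since "transport is closing" is not surfaced as a typed API error (the typed checks shown are an illustrative subset, not the e2e suite's full list):

```go
package e2e

import (
	"strings"

	apierrs "k8s.io/apimachinery/pkg/api/errors"
)

// isRetryableAPIError reports whether err looks transient, so the e2e
// helpers can retry the request instead of failing the test run.
func isRetryableAPIError(err error) bool {
	if err == nil {
		return false
	}

	// Typed transient errors from the Kubernetes API server
	// (illustrative subset).
	if apierrs.IsInternalError(err) || apierrs.IsTimeout(err) ||
		apierrs.IsServerTimeout(err) || apierrs.IsTooManyRequests(err) {
		return true
	}

	// The gRPC "transport is closing" failure has no typed equivalent,
	// so match on the error message text (the addition this PR makes,
	// reconstructed here as a sketch).
	if strings.Contains(err.Error(), "transport is closing") {
		return true
	}

	return false
}
```

Polling callers in the e2e helpers can then keep waiting on such errors until the connection to the apiserver is restored, rather than aborting on the first transient failure.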


Available bot commands

These commands are normally not required, but in case of issues, leave any of
the following bot commands in an otherwise empty comment in this PR:

  • /retest ci/centos/<job-name>: retest the <job-name> after unrelated
    failure (please report the failure too!)
  • /retest all: run this in case the CentOS CI failed to start/report any test
    progress or results

mergify bot added the component/testing (Additional test cases or CI work) label Nov 3, 2021
e2e/errors.go (review thread resolved)
@nixpanic (Member, Author)

@Mergifyio rebase

mergify bot (Contributor) commented Nov 11, 2021

rebase

✅ Branch has been successfully rebased

@nixpanic (Member, Author)

/retest ci/centos/mini-e2e/k8s-1.21

@nixpanic (Member, Author)

/retest ci/centos/mini-e2e/k8s-1.21

https://jenkins-ceph-csi.apps.ocp.ci.centos.org/blue/organizations/jenkins/mini-e2e_k8s-1.21/detail/mini-e2e_k8s-1.21/1734/pipeline

STEP: create a PVC and bind it to an app using rbd-nbd mounter with encryption
Nov 11 08:06:48.506: INFO: waiting for kubectl (delete -f args []) to finish
Nov 11 08:06:48.506: INFO: Running '/usr/bin/kubectl --server=https://192.168.39.199:8443 --kubeconfig=/root/.kube/config --namespace=cephcsi-e2e-18396e81 delete -f -'
Nov 11 08:06:48.637: INFO: stderr: "warning: deleting cluster-scoped resources, not scoped to the provided namespace\n"
Nov 11 08:06:48.637: INFO: stdout: "storageclass.storage.k8s.io \"csi-rbd-sc\" deleted\n"
Nov 11 08:06:48.643: INFO: ExecWithOptions {Command:[/bin/sh -c ceph fsid] Namespace:rook-ceph PodName:rook-ceph-tools-7467d8bf8-hp8p6 ContainerName:rook-ceph-tools Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Nov 11 08:06:48.643: INFO: >>> kubeConfig: /root/.kube/config
Nov 11 08:06:50.890: INFO: Waiting up to &PersistentVolumeClaim{ObjectMeta:{rbd-pvc  rbd-1318    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] []  []},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{1073741824 0} {<nil>} 1Gi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*csi-rbd-sc,VolumeMode:nil,DataSource:nil,DataSourceRef:nil,},Status:PersistentVolumeClaimStatus{Phase:,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},} to be in Bound state
Nov 11 08:06:50.890: INFO: waiting for PVC rbd-pvc (0 seconds elapsed)
Nov 11 08:06:52.893: INFO: waiting for PVC rbd-pvc (2 seconds elapsed)
Nov 11 08:06:52.900: INFO: Waiting for PV pvc-4ecad1a4-6264-43e9-899e-4a320657cba4 to bind to PVC rbd-pvc
Nov 11 08:06:52.900: INFO: Waiting up to timeout=10m0s for PersistentVolumeClaims [rbd-pvc] to have phase Bound
Nov 11 08:06:52.902: INFO: PersistentVolumeClaim rbd-pvc found and phase=Bound (2.473521ms)
Nov 11 08:06:52.902: INFO: Waiting up to 10m0s for PersistentVolume pvc-4ecad1a4-6264-43e9-899e-4a320657cba4 to have phase Bound
Nov 11 08:06:52.904: INFO: PersistentVolume pvc-4ecad1a4-6264-43e9-899e-4a320657cba4 found and phase=Bound (2.312858ms)
Nov 11 08:06:52.916: INFO: Waiting up to csi-rbd-demo-pod to be in Running state
Nov 11 08:07:12.934: INFO: ExecWithOptions {Command:[/bin/sh -c rbd image-meta get replicapool/csi-vol-5340536c-42c6-11ec-9142-fa2e771ed721 rbd.csi.ceph.com/encrypted] Namespace:rook-ceph PodName:rook-ceph-tools-7467d8bf8-hp8p6 ContainerName:rook-ceph-tools Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Nov 11 08:07:12.934: INFO: >>> kubeConfig: /root/.kube/config
Nov 11 08:07:13.948: INFO: ExecWithOptions {Command:[/bin/sh -c lsblk -o TYPE,MOUNTPOINT | grep '/var/lib/www/html' | awk '{print $1}'] Namespace:rbd-1318 PodName:csi-rbd-demo-pod ContainerName:web-server Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Nov 11 08:07:13.948: INFO: >>> kubeConfig: /root/.kube/config
Nov 11 08:07:14.098: FAIL: failed to validate encrypted pvc with error  not equal to crypt

Seems to be #2610

@nixpanic (Member, Author)

@Mergifyio rebase

mergify bot (Contributor) commented Nov 16, 2021

rebase

✅ Branch has been successfully rebased

nixpanic force-pushed the issue/2613 branch 2 times, most recently from 7ee30ce to 185c9f5 on November 16, 2021 14:28
@nixpanic (Member, Author)

/retest ci/centos/mini-e2e-helm/k8s-1.22

@nixpanic (Member, Author)

/retest ci/centos/mini-e2e-helm/k8s-1.22

Failed with #2264

@nixpanic (Member, Author)

@Mergifyio rebase

Commit message:

There have been occasional CI job failures due to "transport is closing"
errors. Adding this error to the isRetryableAPIError() function should
make sure to retry the request until the connection is restored.

Fixes: ceph#2613
Signed-off-by: Niels de Vos <ndevos@redhat.com>
mergify bot (Contributor) commented Nov 17, 2021

rebase

✅ Branch has been successfully rebased

Labels: component/testing (Additional test cases or CI work)
Projects: none yet
Development

Successfully merging this pull request may close these issues.

e2e: spurious failure error while getting pvc: rpc error: code = Unavailable desc = transport is closing
3 participants