WIP: prow testing #41
verify-subtree.sh: relax check and ignore old content

If for whatever reason a repo already had a `release-tools` directory before doing a clean import of it with `git subtree`, the check used to fail because it found those old commits. This can be fixed by telling `git log` to stop when the directory disappears from the repo. There has to be a commit which removes the old content, because otherwise `git subtree add` doesn't work.

Fixes: kubernetes-csi/external-resizer#21
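A minimal sketch of the relaxed check, assuming the subtree is checked out under `release-tools`; `git log`'s documented `--remove-empty` option stops the history traversal once the path disappears from the tree:

    # List only the commits since release-tools was (re-)created by "git subtree
    # add"; --remove-empty stops once the directory vanishes from the tree, so
    # commits touching an older, since-deleted copy are no longer reported.
    git log --oneline --remove-empty -- release-tools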
[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: pohly
If they are not already assigned, you can assign the PR to them by writing /assign @pohly in a comment. The full list of commands accepted by this bot can be found here. The pull request process is described here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment. Approvers can cancel approval by writing /approve cancel.
This is work-in-progress because there is no automated testing for this and, more importantly, it still doesn't quite work for me due to a permission issue:
That was on a slightly older revision for Kubernetes 1.13 though.
/test pull-sig-storage-csi-driver-host-path-kubernetes-master
There is an E2E test now and the permission issue has been root-caused (removing RBAC rules before the pod has terminated).
/test pull-sig-storage-cluster-driver-registrar-kubernetes-1-13
/test pull-sig-storage-cluster-driver-registrar-kubernetes-1-13
Without these line breaks, the help text is rather difficult to read:

    -pod-info-mount
        This indicates that the associated CSI volume driverrequires additional pod information (like podName, podUID, etc.) during mount.When set to true, Kubelet will send the followings pod information during NodePublishVolume() calls to the driver as VolumeAttributes:- csi.storage.k8s.io/pod.name: pod.Name - csi.storage.k8s.io/pod.namespace: pod.Namespace - csi.storage.k8s.io/pod.uid: string(pod.UID)

With line breaks, it now looks like this, which is probably what was intended:

    -pod-info-mount
        This indicates that the associated CSI volume driver requires additional pod information (like podName, podUID, etc.) during mount.
        When set to true, Kubelet will send the followings pod information during NodePublishVolume() calls to the driver as VolumeAttributes:
        - csi.storage.k8s.io/pod.name: pod.Name
        - csi.storage.k8s.io/pod.namespace: pod.Namespace
        - csi.storage.k8s.io/pod.uid: string(pod.UID)
Kubernetes sends SIGTERM, not SIGINT, when it asks processes to terminate. Fixes: kubernetes-csi#40
The code handling shutdown had a theoretical race condition (one goroutine creating the CSIDriver object, the other removing it). It's cleaner to do it without relying on goroutines. Catching all termination signals makes the code less dependent on Kubernetes specifics. Testing this feature showed that an orderly removal of a CSI driver is non-trivial; this needs to be documented. While at it, the documentation gets improved (more examples, removal of remaining references to CRD, spelling).
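The distinction matters in practice: Kubernetes delivers SIGTERM to the container's main process and only escalates to SIGKILL after the grace period. A minimal shell sketch of a handler that shuts down cleanly under those rules (the registrar itself implements this in Go, so this is only an illustration of the principle):

    #!/bin/sh
    cleanup () {
        echo "cleaning up, then exiting"   # placeholder for the real cleanup work
        exit 0
    }
    trap cleanup TERM INT                  # catch all termination signals, not just INT
    while true; do sleep 1; done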
In repos that have a test/e2e, that test suite should be run separately because it depends on a running cluster.
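For a Go component this amounts to keeping the cluster-dependent packages out of the default test run; a hedged sketch with illustrative package paths:

    # Unit tests: no cluster required, safe to run as "make test" in CI.
    go test $(go list ./... | grep -v /test/e2e)
    # E2E suite: run explicitly, against an already running cluster.
    go test ./test/e2e -args -kubeconfig "$KUBECONFIG"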
This is an unmodified copy of kubernetes/hack/verify-shellcheck.sh revision d5a3db003916b1d33b503ccd2e4897e094d8af77.
/test pull-sig-storage-cluster-driver-registrar-kubernetes-1-13
/test pull-sig-storage-cluster-driver-registrar-kubernetes-1-13
/test pull-sig-storage-cluster-driver-registrar-kubernetes-1-13
/test pull-sig-storage-cluster-driver-registrar-kubernetes-1-13
/test pull-sig-storage-cluster-driver-registrar-kubernetes-1-13
This vendors the Kubernetes E2E framework and implements a custom E2E test which (currently) deploys the cluster-driver-registrar together with the mock CSI driver and tests that CSIDriver object creation and removal work as expected. Quite a few of the E2E dependencies will go away in Kubernetes 1.15 once kubernetes/kubernetes#75340 is merged.
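What the new test automates can also be checked by hand against a running cluster; roughly (the manifest directory is hypothetical):

    # Deploy cluster-driver-registrar together with the mock CSI driver, then
    # verify that the CSIDriver API object appears and disappears as expected.
    kubectl apply -f deploy/     # hypothetical manifest directory
    kubectl get csidriver        # the object should be listed once the driver runs
    kubectl delete -f deploy/
    kubectl get csidriver        # and gone again after an orderly shutdown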
/test pull-sig-storage-cluster-driver-registrar-kubernetes-1-13
/test pull-sig-storage-cluster-driver-registrar-kubernetes-1-13
/test pull-sig-storage-cluster-driver-registrar-kubernetes-master
/test pull-sig-storage-cluster-driver-registrar-kubernetes-master
These are the modifications that were necessary to call this outside of Kubernetes. The support for excluding files from checking gets removed to simplify the script. It shouldn't be needed, because linting can be enabled after fixing whatever scripts might fail the check.
By default this only tests the scripts inside the "release-tools" directory, which is useful when making experimental changes to them in a component that uses csi-release-tools. But a component can also enable checking for other directories.
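The core of the check, reduced to a sketch with a hypothetical variable for the directory list:

    # Default: lint only the scripts that csi-release-tools itself ships;
    # a component can widen the list once its own scripts pass the check.
    dirs="release-tools"          # e.g. dirs="release-tools deploy test"
    find $dirs -name '*.sh' -print0 | xargs -0 shellcheck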
This enables testing of other repos and of this repo itself inside Prow. Currently supported are unit testing ("make test") and E2E testing (either via a local test suite or the Kubernetes E2E test suite applied to the hostpath driver example deployment). The script passes shellcheck and uses Prow to verify that for future PRs.
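The intended hookup is that each component repo contains a tiny wrapper which the Prow jobs invoke; a sketch of such a wrapper, assuming the usual csi-release-tools layout with the shared script in release-tools/prow.sh and a main entry point defined there:

    #!/bin/bash -e
    # .prow.sh in the component repo: delegate everything to the shared script.
    . release-tools/prow.sh
    main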
The travis.yml is now the only place where the Go version for the component itself needs to be configured.
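The travis.yml entry then becomes machine-readable configuration. A sketch of extracting it from a script, assuming the conventional Travis list form of the Go entry:

    # travis.yml contains, e.g.:
    #   go:
    #     - 1.12.1
    # Print the version number on the line that follows the "go:" key:
    go_version="$(awk '/^go:/ { getline; print $2 }' travis.yml)"
    echo "building the component with Go $go_version"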
Using the same (recent) Go version for all Kubernetes versions can break for older versions when there are incompatible changes in Go. To avoid that, we use exactly the minimum version of Go required for each Kubernetes version. This is based on the assumption that this combination was tested successfully. When building the E2E suite from Kubernetes (the default) we do the same, but still allow building it from elsewhere. We allow the Go version to be empty when it doesn't matter.
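A condensed sketch of that mapping as a shell function; the function name is made up and the version numbers are only illustrative of "minimum Go per Kubernetes release":

    go_version_for () {
        case "$1" in
            release-1.13) echo 1.11.13;;   # minimum Go for Kubernetes 1.13
            release-1.14) echo 1.12.1;;    # minimum Go for Kubernetes 1.14
            *) echo "";;                   # empty: the Go version doesn't matter
        esac
    }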
While switching back and forth between release-1.13 and release-1.14 locally, the local kind image kept using the wrong kubelet binary despite rebuilding from source. The problem went away after cleaning the Bazel cache. The exact root cause is unknown, but perhaps using unique tags and properly cleaning the repo helps. If not, then the unique tags will at least show what the problem is.
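A hedged sketch of the unique-tag scheme (the image name is arbitrary):

    # Tag each locally built node image with the current revision so that a
    # stale image from an earlier branch cannot be mistaken for a fresh build.
    tag="csiprow-$(git rev-parse --short HEAD)"
    kind build node-image --image "csi/node:$tag"
    kind create cluster --image "csi/node:$tag"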
Instead of always using the latest E2E tests for all Kubernetes versions, the plan now is to use the tests that match the Kubernetes version. However, for 1.13 we have to make an exception because the suite for that version did not support the necessary --storage.testdriver parameter.
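The selection rule, sketched with hypothetical variable names; which newer suite 1.13 falls back to is an assumption here:

    case "$kubernetes_version" in
        1.13) e2e_branch="master";;   # assumption: fall back to a newer suite,
                                      # because 1.13 lacks --storage.testdriver
        *)    e2e_branch="release-$kubernetes_version";;
    esac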
/test pull-sig-storage-cluster-driver-registrar-kubernetes-master
The temporary fork was merged; we can use the upstream repo again.
/test pull-sig-storage-cluster-driver-registrar-kubernetes-master
cluster-driver-registrar is deprecated, no need for this PR anymore
Contains a drive-by fix for the -help message and fixes shutdown handling.