Add simple e2e tests in Go for no-op success and failure builds #329
Conversation
Couple of suggestions but rest looks great
/approve
/lgtm
```go
} else if kuberrors.IsAlreadyExists(err) {
	log.Printf("Namespace %q already exists", buildTestNamespace)
} else {
	log.Fatalf("Creating namespace %q: %v", buildTestNamespace, err)
```
Just a suggestion: it helps debugging if the log message explicitly calls out the word "error", e.g.:

```go
log.Fatalf("Error creating namespace %q: %v", buildTestNamespace, err)
```
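Applied to the snippet above, the create-or-ignore pattern with the suggested "Error" prefix looks roughly like this (a stdlib-only sketch: `ensureNamespace`, `isAlreadyExists`, and the map-backed fake creator are hypothetical stand-ins for the real `kuberrors.IsAlreadyExists` check and the Kubernetes client):

```go
package main

import (
	"errors"
	"fmt"
	"log"
)

// errAlreadyExists stands in for the Kubernetes "AlreadyExists" API error;
// the real code checks kuberrors.IsAlreadyExists from
// k8s.io/apimachinery/pkg/api/errors.
var errAlreadyExists = errors.New("namespace already exists")

func isAlreadyExists(err error) bool { return errors.Is(err, errAlreadyExists) }

// ensureNamespace creates the namespace via create, treating an
// already-exists error as success so repeated test runs are idempotent.
// Hard failures carry an explicit "Error" prefix, as suggested above.
func ensureNamespace(name string, create func(string) error) error {
	switch err := create(name); {
	case err == nil:
		log.Printf("Created namespace %q", name)
	case isAlreadyExists(err):
		log.Printf("Namespace %q already exists", name)
	default:
		return fmt.Errorf("Error creating namespace %q: %v", name, err)
	}
	return nil
}

// newFakeCreator simulates the API server with an in-memory set of
// existing namespaces.
func newFakeCreator(existing ...string) func(string) error {
	seen := map[string]bool{}
	for _, ns := range existing {
		seen[ns] = true
	}
	return func(name string) error {
		if seen[name] {
			return errAlreadyExists
		}
		seen[name] = true
		return nil
	}
}

func main() {
	create := newFakeCreator("build-tests")
	// Both calls succeed: the first hits the already-exists branch,
	// so rerunning the test suite against the same cluster is safe.
	fmt.Println(ensureNamespace("build-tests", create))
	fmt.Println(ensureNamespace("build-tests", create))
}
```

The key design point is that "already exists" is treated as success, so the setup never requires a clean cluster.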
```go
// Delete the test namespace to be recreated next time.
if err := clients.kubeClient.Kube.CoreV1().Namespaces().Delete(buildTestNamespace, &metav1.DeleteOptions{}); err != nil && !kuberrors.IsNotFound(err) {
	log.Fatalf("Deleting namespace %q: %v", buildTestNamespace, err)
```
Similar suggestion:

```go
log.Fatalf("Error deleting namespace %q: %v", buildTestNamespace, err)
```
[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: ImJasonH, shashwathi

The full list of commands accepted by this bot can be found here. The pull request process is described here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment.
```sh
ko apply -R -f test/ || fail_test

header "Running Go e2e tests"
GOCACHE=off go test -tags e2e ./test/e2e/... || fail_test
```
Use `report_go_test` here instead of `go test`, so you'll get individual results on testgrid and in the gubernator report.

Also, don't `fail_test` here. Fail after checking the results, so you'll get all results of the tests. Better to run the go tests first, since you're not checking the results here (even better: move everything to a function). Something like:

```sh
local failed=0
report_go_test ... || failed=1
run_old_ugly_tests_in_shell_script || failed=1
(( failed )) && abort_test
success
```
To run all e2e tests:

```
GOCACHE=off go test -tags e2e ./test/e2e/...
```
This info seems to conflict with using `count=1`, as per https://golang.org/doc/go1.10#test
```go
// Ensure the test namespace exists, by trying to create it and ignoring
// already-exists errors.
if _, err := clients.kubeClient.Kube.CoreV1().Namespaces().Create(&corev1.Namespace{
```
This is a nice thing, we should adopt it for serving as well, instead of having to create a test space as a requirement.
…tive#329)

* Add simple e2e tests for no-op success and failure builds
* Add license headers
* Add go tests to e2e-tests.sh
* Address review comments
* Unify both types of test failures
This uses some test infrastructure from knative/pkg, namely flags and kubeclient setup.

It uses a TestMain to ensure that the test namespace build-tests exists in the specified cluster (in the process ensuring the cluster exists and is reachable), then runs two builds, one that succeeds and one that fails.

Future improvements should eventually replace all the YAML-based tests in test/, and more, since with this extra level of control we can do things like check for logs, interact with the build's underlying pod (e.g., what happens when the underlying pod dies?), check timing (e.g., build only ran during its configured timeout), check state transitions (e.g., build was Pending, then Running, then Successful), kill a build while it's running or pending, and track performance metrics over time.

Release Note
/assign @shashwathi
/assign @mattmoor