test/e2e: improve nginx deployment test #2044
Conversation
Makes sense to me. Thanks @wainersm!
I don't have a lot of context, but in my (manual) tests I'm always using deployments. Essentially, though, deployments are higher-order Kubernetes constructs; as a primitive they should not be relevant for CAA. Thus, if we have a Deployment test that spawns Pods with a Pod spec matching 1:1 the spec of a (reliable) Pod test, I'd conclude that there are problems in our testing suite (race conditions, probably). Maybe it makes sense to remodel the deployment tests to mirror the pod test closely; see the sketch below.
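One way to picture that remodeling (a hypothetical sketch, not the suite's actual code; the helper name, runtime class, and image are assumptions): derive the Deployment's pod template from the very same `PodSpec` value the pod test uses, so both tests run an identical pod.

```go
import (
	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// nginxObjects derives a Pod and a Deployment from one shared PodSpec,
// so the deployment test only adds the controller layer on top of the
// exact pod the pod test exercises.
func nginxObjects(namespace string) (*corev1.Pod, *appsv1.Deployment) {
	runtimeClass := "kata-remote" // assumed runtime class for peer pods
	replicas := int32(1)
	labels := map[string]string{"app": "nginx"}

	podSpec := corev1.PodSpec{
		RuntimeClassName: &runtimeClass,
		Containers: []corev1.Container{{
			Name:  "nginx",
			Image: "nginx",
		}},
	}

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "nginx-pod", Namespace: namespace},
		Spec:       podSpec,
	}

	deployment := &appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "nginx-deployment", Namespace: namespace},
		Spec: appsv1.DeploymentSpec{
			Replicas: &replicas,
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec:       podSpec, // identical to the pod test's spec
			},
		},
	}
	return pod, deployment
}
```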
I also noticed that the tests cause side effects that make it impossible to run them in parallel. Essentially, the asserts of one test find leftovers of services or other k8s objects from another test, because they use the same names. So it might make sense to add some logic that makes each test independent, e.g. a random segment in the names.
We currently have a different namespace for each test run. We picked that to make it easier to clean everything up together if it went wrong, but if you think it would help to have a random namespace for every test, then we could consider that?
Yeah, ideally the tests within a test run should also be independent, so running each of them in its own namespace would indeed be a good measure; it'll also help with cleanup. We can use a label, along the lines of the sketch below:
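A minimal sketch of that idea (function names and the label key are assumptions, not existing code): create a random, labeled namespace per test, then bulk-delete by label selector during cleanup.

```go
import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"sigs.k8s.io/e2e-framework/pkg/envconf"
)

// newTestNamespace creates a randomly named namespace carrying a label
// that later identifies it (and its leftovers) for cleanup.
func newTestNamespace(ctx context.Context, client kubernetes.Interface) (string, error) {
	name := envconf.RandomName("caa-e2e", 16) // random suffix avoids name clashes
	ns := &corev1.Namespace{
		ObjectMeta: metav1.ObjectMeta{
			Name:   name,
			Labels: map[string]string{"caa-e2e-test": "true"},
		},
	}
	_, err := client.CoreV1().Namespaces().Create(ctx, ns, metav1.CreateOptions{})
	return name, err
}

// cleanupTestNamespaces deletes every namespace created by the tests.
// Namespaces don't support DeleteCollection, so list by label and delete.
func cleanupTestNamespaces(ctx context.Context, client kubernetes.Interface) error {
	list, err := client.CoreV1().Namespaces().List(ctx, metav1.ListOptions{
		LabelSelector: "caa-e2e-test=true",
	})
	if err != nil {
		return err
	}
	for _, ns := range list.Items {
		if err := client.CoreV1().Namespaces().Delete(ctx, ns.Name, metav1.DeleteOptions{}); err != nil {
			return err
		}
	}
	return nil
}
```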
Hi @mkulke,
I don't remember the context in which the nginx deployment test was added, and the original commit doesn't explain it either, so I don't know for sure. Well, it's exercising […]. In the end, I'm trying to understand whether I should address the static checker errors aiming to get this merged, OR drop the nginx deployment test, OR take another action :)
Makes sense to me. Created an issue to discuss that and other stuff for running the tests in parallel: #2092
Force-pushed from ead2efe to a3749ee
@stevenhorsman rebased and resolved conflicts. I will run it locally and paste the results here, as this test is disabled on CI.
I've had some success in #2183 uncommenting some of the failed tests, so do you think it's worth trying to un-skip it as part of this PR, or do you just want to get these improvements in for local tests at the moment?
@stevenhorsman the nginx deployment test failed locally. It might be my environment. The good thing is that this failure ended up exercising the changes I'm proposing here :) So I think it's better to submit a separate PR with the tests that passed on PR 2183.
Fair enough. I guess we get this in and worry about the other failure later?
linter is sad, but otherwise lgtm
Fixed the printing of the pod name. Also print a user-friendly message when there are no pod logs to show. Signed-off-by: Wainer dos Santos Moschetta <wainersm@redhat.com>
Force-pushed from a3749ee to f49770a
Thanks! I had forgotten the linter warning... I sent a fix in a separate commit; if you guys ack it, then I can apply the fixup before merging.
The teardown function doesn't run if WithSetup() fails, which might leave the deployment on the cluster, where it can mess with the next tests. This replaces some t.Fatal() calls with t.Error() so that the current goroutine doesn't exit; execution in fact continues after the t.Error() call, with the test merely marked failed, so I had to return properly after t.Error(). When WithSetup() fails it doesn't make sense to run Assess(), and the only way to pass the status down to Assess() is through the `ctx` variable (due to a limitation of the k8s e2e framework, the `t` object is not shared). Signed-off-by: Wainer dos Santos Moschetta <wainersm@redhat.com>
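To illustrate the pattern this commit describes (function and key names here are assumed for the example, not the PR's actual code): setup records its failure in the context, and assess checks it before asserting.

```go
import (
	"context"
	"testing"

	"sigs.k8s.io/e2e-framework/pkg/envconf"
)

// setupFailedKey marks in the context that setup failed, since the
// e2e framework doesn't share the *testing.T instance between steps.
type setupFailedKey struct{}

func setup(ctx context.Context, t *testing.T, cfg *envconf.Config) context.Context {
	if err := createNginxDeployment(ctx, cfg); err != nil { // hypothetical helper
		// Unlike t.Fatal(), t.Error() marks the test failed without
		// exiting the goroutine, hence the explicit return right after.
		t.Error("failed to create the nginx deployment:", err)
		return context.WithValue(ctx, setupFailedKey{}, true)
	}
	return ctx
}

func assess(ctx context.Context, t *testing.T, cfg *envconf.Config) context.Context {
	if failed, _ := ctx.Value(setupFailedKey{}).(bool); failed {
		// Setup already failed and reported; skip the assertions but
		// still let the framework reach Teardown().
		return ctx
	}
	// ... actual assertions on the deployment go here ...
	return ctx
}
```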
Force-pushed from f49770a to 8eacfb2
Just applied the fixup.
The nginx deployment test isn't completely stable. Sometimes it fails, and we don't know yet whether it's a legit bug in peer pods or not. This PR doesn't solve that problem either, as I still don't know the root of the problem; however, it improves the test so that it will run the `Teardown()` function even when `WithSetup()` failed. We suspect that the leftover nginx deployment might be messing with the next tests, so this change will at least solve that problem (if it's really a problem). Regardless, tests should always delete their resources at the end of execution.

There is one thing that I'd still like to implement: do not run `Teardown()` if not running on CI. In other words, if I'm running the test locally and it fails, I want to keep the deployment running so I can inspect it. I may send that in another PR, or maybe even in this one, depending on the reviews.
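One possible shape for that (a sketch; the `CI` env var check and the helper name are assumptions):

```go
import (
	"context"
	"os"
	"testing"

	"sigs.k8s.io/e2e-framework/pkg/envconf"
)

func teardown(ctx context.Context, t *testing.T, cfg *envconf.Config) context.Context {
	// On a local failure, keep the deployment around for inspection;
	// on CI (assumed to set the conventional CI env var) always clean up.
	if t.Failed() && os.Getenv("CI") == "" {
		t.Log("test failed outside CI; leaving the nginx deployment for inspection")
		return ctx
	}
	if err := deleteNginxDeployment(ctx, cfg); err != nil { // hypothetical helper
		t.Error("failed to delete the nginx deployment:", err)
	}
	return ctx
}
```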