Running t.FailNow() in Assess does not fail the following assess #386

Closed
Fricounet opened this issue Feb 19, 2024 · 4 comments · Fixed by #391
Labels
kind/bug

Comments

@Fricounet
Contributor

Fricounet commented Feb 19, 2024

What happened?

When running tests without the fail-fast flag, if t.FailNow() is called in one assess step, the following assess steps still run. I think this is an error because calling FailNow means the test has failed and there is no point in continuing.

What did you expect to happen?

I expect the feature being tested not to run the remaining assess steps when one of them fails with FailNow.
I know this can be achieved with -fail-fast; however, I have two issues with it:

  • it can only be set globally, meaning that it affects all other tests and features, while this is a behavior I might want in only one of my tests. Also, the test writer might not have the ability to change the global flag value. Respecting the difference between FailNow and Fail (illustrated after this list) puts the decision in the hands of the test writer, who probably knows best when the feature should stop running.
  • when -fail-fast is set, a failure also fails the other features of the test. In some cases, features are separate enough that a failure in one does not impact the others, in which case failing all of them is not desirable.
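
For context, this mirrors how the two calls behave in plain Go testing: t.Fail() marks the test failed but lets the function keep running, while t.FailNow() marks it failed and stops the test function immediately. A minimal standalone illustration (separate from the reproduction below):

package example

import "testing"

func TestFailVsFailNow(t *testing.T) {
	t.Run("fail", func(t *testing.T) {
		t.Fail() // mark failed, keep executing
		t.Log("this line still runs after Fail")
	})
	t.Run("failnow", func(t *testing.T) {
		t.FailNow() // mark failed and stop this subtest immediately
		t.Log("this line is never reached after FailNow")
	})
}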

How can we reproduce it (as minimally and precisely as possible)?

Example code:

package example

import (
	"context"
	"os"
	"testing"

	"sigs.k8s.io/e2e-framework/pkg/env"
	"sigs.k8s.io/e2e-framework/pkg/envconf"
	"sigs.k8s.io/e2e-framework/pkg/features"
)

var testenv env.Environment

func TestMain(m *testing.M) {
	testenv = env.New()
	os.Exit(testenv.Run(m))
}

func Test1(t *testing.T) {
	feat := features.New("Test1").Assess("Assess1", func(ctx context.Context, t *testing.T, cfg *envconf.Config) context.Context {
		t.Fatal("This should run")
		return ctx
	}).Assess("Assess2", func(ctx context.Context, t *testing.T, cfg *envconf.Config) context.Context {
		t.Log("This should NOT run")
		return ctx
	}).Feature()

	testenv.Test(t, feat)
}

Result:

❯ go test . -test.v
=== RUN   Test1
=== RUN   Test1/Test1
=== RUN   Test1/Test1/Assess1
    main_test.go:38: This should run
=== RUN   Test1/Test1/Assess2
    main_test.go:41: This should NOT run
--- FAIL: Test1 (0.00s)
    --- FAIL: Test1/Test1 (0.00s)
        --- FAIL: Test1/Test1/Assess1 (0.00s)
        --- PASS: Test1/Test1/Assess2 (0.00s)
FAIL
FAIL	sigs.k8s.io/e2e-framework/pkg/envfuncs	0.009s
FAIL

Anything else we need to know?

I took a look at the code and it should be fairly easy to fix by detecting whether FailNow was called around here. I can submit a PR if that is OK.
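
For what it is worth, here is a rough sketch of how the detection could look, assuming each assess step is executed in its own t.Run subtest; the runAssess wrapper and the stepFunc type below are hypothetical names, not the framework's actual internals. The idea is that t.FailNow()/t.Fatal() abort the assess function via runtime.Goexit, so a flag set on normal completion stays unset, which distinguishes them from a plain t.Fail():

package example

import (
	"context"
	"testing"

	"sigs.k8s.io/e2e-framework/pkg/envconf"
)

// stepFunc mirrors the assess signature used in the reproduction above.
type stepFunc func(ctx context.Context, t *testing.T, cfg *envconf.Config) context.Context

// runAssess (hypothetical) runs one assess step as a subtest and reports
// whether the step was aborted by FailNow/Fatal rather than returning normally.
func runAssess(ctx context.Context, t *testing.T, cfg *envconf.Config, name string, fn stepFunc) (context.Context, bool) {
	abortedByFailNow := false
	t.Run(name, func(t *testing.T) {
		finished := false
		defer func() {
			// FailNow/Fatal never reach the end of fn, so finished stays false.
			if !finished && t.Failed() {
				abortedByFailNow = true
			}
		}()
		ctx = fn(ctx, t, cfg)
		finished = true
	})
	return ctx, abortedByFailNow
}

The feature runner could then stop executing the remaining assess steps of that feature when the second return value is true, without requiring the global -fail-fast flag.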

E2E Provider Used

kind

e2e-framework Version

v0.3.0

OS version

# On Linux:
$ cat /etc/os-release
# paste output here
$ uname -a
# paste output here

# On Windows:
C:\> wmic os get Caption, Version, BuildNumber, OSArchitecture
# paste output here
@Fricounet added the kind/bug label on Feb 19, 2024
@harshanarayana
Contributor

@Fricounet xref: #112

I thought we fixed this a while ago! Let me take a look at this over the weekend again.

@harshanarayana
Contributor

I took a look at the code and it should be fairly easy to fix by catching if FailNow was called around here. I can submit a PR if that is ok

@Fricounet Yes please. Will be happy to accept a PR for this

@Fricounet
Contributor Author

I thought we fixed this a while ago! Let me take a look at this over the weekend again.

Maybe I don't understand correctly, but the issue you linked was marked as closed with the implementation of the -fail-fast flag. However, as mentioned in the description, this approach has several drawbacks:

  • it affects all tests because it is a global flag
  • it also fails the other features of the test

In my case, I just think that using FailNow inside a test should, well, fail the feature now, regardless of whether the flag is set.

I'll have a PR for it this week I think :)

@harshanarayana
Contributor

@Fricounet Ack. I think I got confused while reading the description and pointed you to an old PR. Please feel free to open a contribution PR. Will be greatly appreciated.
