proposal: testing: add a flag to detect unnecessary skips #25951
Comments
It's an interesting idea, but I note that some tests are skipped not because they fail, but because they impose excessive resource costs on some systems (beyond what …
That sort of use-case is part of the reason I proposed a regular expression rather than a boolean flag. However, it certainly remains a usability consideration: for example, we wouldn't want to encourage people to run …
It would also be nice to handle flaky tests somehow. Some tests are skipped because they are flaky on some platforms, so the fact that they pass once doesn't indicate much one way or the other.

For non-flaky tests, I wonder whether test.XFail("reason") would suffice to address this problem. It would run the test as usual but flip the meaning of the result: a failure is a pass, a pass is a failure. You mentioned this possibility in the original proposal; why not use it for Go?
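A user-space approximation of that semantics is possible today by running the test body against a recording stand-in for *testing.T. A minimal sketch, where XFail, testingT, and recorder are all invented names for illustration, not anything proposed here:

```go
package xfail

import (
	"runtime"
	"testing"
)

// testingT is the subset of *testing.T that a body run under XFail may use.
type testingT interface {
	Errorf(format string, args ...interface{})
	Fatalf(format string, args ...interface{})
}

// recorder implements testingT, recording failures instead of reporting them.
type recorder struct{ failed bool }

func (r *recorder) Errorf(format string, args ...interface{}) { r.failed = true }
func (r *recorder) Fatalf(format string, args ...interface{}) {
	r.failed = true
	runtime.Goexit() // mimic Fatalf: stop the body immediately
}

// XFail runs f and flips the result: f failing is expected; f passing is an error.
func XFail(t *testing.T, reason string, f func(t testingT)) {
	t.Helper()
	r := new(recorder)
	done := make(chan struct{})
	go func() { // separate goroutine so Fatalf's Goexit only stops the body
		defer close(done)
		f(r)
	}()
	<-done
	if !r.failed {
		t.Errorf("unexpectedly passed; remove the expected failure? (%s)", reason)
	}
}
```

The obvious cost, and part of why built-in support is attractive, is that the body must accept the narrow interface instead of a real *testing.T.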
I agree that having an "expected failure" mode would be helpful. A potentially related issue I've encountered in the past is making sure that CI doesn't skip tests when it shouldn't. For example, suppose that you write your tests to be skipped if a SQL database isn't running, and you set up one as part of CI. There currently isn't a way to tell …
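Pending something built in, one workaround for that CI problem is to funnel skips through a helper that the environment can veto. A sketch, where skipOrFail and the TEST_FORBID_SKIP variable are invented for illustration:

```go
package testutil

import (
	"os"
	"testing"
)

// skipOrFail skips the test under normal runs, but fails it outright when the
// environment sets TEST_FORBID_SKIP (a hypothetical variable), so a CI job
// that was supposed to provide the dependency notices the misconfiguration.
func skipOrFail(t *testing.T, reason string) {
	t.Helper()
	if os.Getenv("TEST_FORBID_SKIP") != "" {
		t.Fatalf("would skip, but skips are forbidden in this environment: %s", reason)
	}
	t.Skip(reason)
}
```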
That would probably suffice going forward, but introduces a migration problem: existing tests already use Skip. That said, migrating …
XFail sounds interesting in its own right, but it still doesn't handle skip-for-flaky. Let's put this on hold and circle back to package testing at some point later, during the Go 2 library phase.
I had the same feature request and decided to implement it as a learning exercise and potential contribution. It does not require a new flag. It includes tests, but needs additional documentation: 13rac1@a9a08b59

Passing Test: …
Passing Results: …
Failing Test: …
Failing Results: …
(Edited: SHA updated after rebase.)
Here's a neat example (lines 4692 to 4694 at a813d3c):

…

Added in CL 42533 (May 4, 2017), rendered obsolete in CL 46004 (June 16, 2017), and skipping the test needlessly on an entire OS ever since.
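The quoted snippet did not survive extraction above, but skips of that sort typically take a familiar shape inside a test function (illustrative only; the OS and issue number are invented, not the actual lines at a813d3c):

```go
if runtime.GOOS == "plan9" { // illustrative OS, not the one in the real snippet
	t.Skip("skipping on plan9; see golang.org/issue/NNNN") // NNNN closed long ago
}
```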
While this change may not satisfy the original purpose stated here, it's still very valuable for everyone coming from pytest testing methodologies. tl;dr: you can merge tests as they come, and if your CI is based on "everything passes or go home", that is not the end of the world; nor does the code rot away, inactive, in some personal repo. Since the change is not utterly disruptive, it could well land in any minor version before v2.
The decision to put this proposal on hold is from June 2018; would it be possible to reconsider it now, since in my (limited) understanding Go 2 is still undecided?

My 2 cents: even a simple XFail that doesn't cover the skip-for-flaky case is still very useful in its own right, as the examples from @bcmills show. Another advantage of a simple XFail is that it is "always on": there is no need to run the tests with a special flag "just in case something has been fixed and we should remove the Skip". As soon as a test marked XFail returns successfully, it is reported as an unexpected success. A final advantage of a simple XFail is its immediate understandability to anybody seeing it for the first time.
Having something like XFail would also be helpful when building, or building on top of, testing helpers and frameworks. For example, I'm developing a custom matcher for gomock, and I would like a test ensuring that certain inputs cause the test to fail. Currently there's no way I can see to do that. The bare-bones approach of …
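For the gomock case specifically, there is a workaround today: gomock.NewController accepts any gomock.TestReporter, so a test can hand it a recording reporter instead of the real *testing.T and then assert that the matcher produced a failure. A sketch, assuming the github.com/golang/mock/gomock import path, with failRecorder and expectMockFailure as invented names:

```go
package matcher_test

import (
	"testing"

	"github.com/golang/mock/gomock"
)

// failRecorder satisfies gomock.TestReporter, recording failures rather
// than failing the enclosing test.
type failRecorder struct{ failed bool }

func (r *failRecorder) Errorf(format string, args ...interface{}) { r.failed = true }
func (r *failRecorder) Fatalf(format string, args ...interface{}) { r.failed = true }

// expectMockFailure runs fn against a controller wired to a failRecorder
// and reports whether gomock registered any failure.
func expectMockFailure(t *testing.T, fn func(ctrl *gomock.Controller)) bool {
	t.Helper()
	rec := new(failRecorder)
	ctrl := gomock.NewController(rec)
	fn(ctrl)
	ctrl.Finish()
	return rec.failed
}
```

One caveat: gomock expects Fatalf not to return, so a more robust recorder would panic or call runtime.Goexit there instead of falling through.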
See also #39903.
Motivation
We often use (*testing.T).Skip to skip tests that fail due to known bugs in the Go toolchain, the standard library, or a specific platform where the test runs. When the underlying bugs are fixed, it is important that we remove the skips to prevent regressions.

However, bugs are sometimes fixed without us noticing, especially if they are platform bugs. In such cases, it would be helpful to have some way to detect that situation short of editing and recompiling all of the tests in a project.

Here is a concrete example in the standard library: a t.Skip call that references an issue that was marked closed back in 2016 (#17065). Given how easy that call was to find, I expect that similarly outdated skips are pervasive.

Proposal
A new flag for the testing package, tentatively named -test.unskip, with the corresponding go test flag -unskip. -test.unskip takes a regular expression, which is matched against the formatted arguments of t.Skip or t.Skipf, or against the most recent test log entry before t.SkipNow.

If a Skip matches the regular expression, the test does not stop its execution as usual: instead, it continues to run. If the skipped test passes, the test is recorded as unnecessarily skipped and the binary exits with a nonzero exit code. If the skipped test fails, the test log (and the exit code of the test binary) ignores the failure and proceeds as usual.
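To make those semantics concrete, here is a rough sketch of the decision point, written as ordinary Go rather than as an actual patch to the testing package; every name here (unskipPattern, overrideSkip) is invented:

```go
package sketch

import "regexp"

// unskipPattern would be compiled once from the proposed -test.unskip flag;
// nil means the flag was not set and Skip behaves as it does today.
var unskipPattern *regexp.Regexp

// overrideSkip reports whether a Skip with the given formatted reason should
// be overridden, i.e. the test keeps running instead of stopping. Under the
// proposal, an overridden test that then passes is reported as an unnecessary
// skip (nonzero exit), while a failure is suppressed as if it had been skipped.
func overrideSkip(reason string) bool {
	return unskipPattern != nil && unskipPattern.MatchString(reason)
}
```

Under that scheme, a run like go test -unskip='issue 17065' would exercise only those skips whose messages mention that issue.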
Comparison

Test frameworks in a few other languages provide an explicit mechanism for expecting a test to fail. For example, Python has @unittest.expectedFailure; Boost has an expected_failures decorator; RSpec has the pending keyword.