test: refactor tests for easier isolated execution #15437
Conversation
- Refactor `tools/test.py` to allow nested parallel/sequential/pummel tests.
- Refactor `tools/test.py` to blacklist suites rather than whitelist.
- `Makefile` now allows `CI_JS_SUITES` and `CI_NATIVE_SUITES` to be overridden.
- Introduce special `DEFAULT_JS_SUITES` rather than maintaining a list of suites.
- Delete stale `test-http-client-reconnect-bug.js` test.
It's admittedly ugly that we would need to touch all the test files to enforce this new structure. I am, however, already seeing benefits; it's nice to be able to run coverage for just
```diff
-CI_NATIVE_SUITES := addons addons-napi
-CI_JS_SUITES := abort async-hooks doctool inspector known_issues message parallel pseudo-tty sequential
+CI_NATIVE_SUITES ?= addons addons-napi
+CI_JS_SUITES ?= DEFAULT_JS_SUITES
```
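The switch from `:=` to `?=` is what makes the suites overridable from the environment or the `make` command line. As a rough analogy (shell parameter defaults, not actual Makefile syntax — the fallback list here is illustrative, not the real default):

```shell
# make's ?= operator only applies a default when the variable is not
# already set; shell's ${VAR:-default} behaves the same way.
unset CI_JS_SUITES
echo "${CI_JS_SUITES:-abort async-hooks doctool}"   # falls back to the default

CI_JS_SUITES="parallel"
echo "${CI_JS_SUITES:-abort async-hooks doctool}"   # the override wins
```

With `?=`, a plain `make test-ci` behaves exactly as before, while `CI_JS_SUITES=parallel make test-ci` narrows the run.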
This is a magic variable which `test.py` understands; my thinking is that this would prevent us from accidentally forgetting to update this list in the future.
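A minimal sketch of how such a magic variable could be expanded (the blacklist names below come from the diff in this PR, but the function itself is hypothetical, not the actual `test.py` code):

```python
import os

# Hypothetical blacklist mirroring the suites this PR excludes from the
# default run; the real list lives in tools/test.py.
EXCLUDED = {'gc', 'internet', 'pummel', 'test-known-issues'}

def expand_default_js_suites(test_root, requested):
    """Replace the magic name DEFAULT_JS_SUITES with every suite directory
    under test_root that is not blacklisted. A sketch of the behaviour
    described above, not the actual test.py implementation."""
    if 'DEFAULT_JS_SUITES' not in requested:
        return list(requested)
    discovered = sorted(
        d for d in os.listdir(test_root)
        if os.path.isdir(os.path.join(test_root, d)) and d not in EXCLUDED
    )
    return [s for s in requested if s != 'DEFAULT_JS_SUITES'] + discovered
```

Because the list is computed from the directory tree, adding a new suite folder would not require touching the Makefile.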
If this removes `known_issues`, then we should explicitly add `known_issues` to `test-ci`. Alternatively, there can be a `JS_SUITES` variable and a `CI_JS_SUITES` variable that is `JS_SUITES` + `known_issues`.

EDIT: Uh, that is, unless `test-ci` is used to generate coverage tests. I don't know if coverage reports are based on `make test` or `make test-ci` or something else...
@Trott I think this would be smart; my main concern is I think we should remove `known_issues` from the coverage run, so split the configuration a bit between `test-ci` and `test-coverage`.

EDIT: I think coverage actually defaults to `make test`, so I'd advocate introducing `test-cov` that basically just drops `known_issues` from the suites run so that they don't reflect in coverage (CC: @addaleax).
```python
'gc',
'internet',
'pummel',
'test-known-issues',
```
It seemed reasonable to me that `test-known-issues` should not be part of the default test run, my main concern being that this could lead to inaccurate coverage results.
> That already works

I'm not a fan of the huge amount of churn. I still get annoyed on a weekly basis by the test/simple split into test/parallel and test/sequential, and that at least introduced the benefit of running tests in parallel.
If this happens, you should be prepared to make backports for all active release lines ;) Also, for a bit of history and previous discussion, see #6217
We might just need to advertise that more, yes …
@bcoe ... Thank you for working on this. I'll have to dig into the details a bit later, but I'm a big fan of this and have wanted to see this kind of refactoring for quite some time. And yes, if this goes through we'll definitely need to backport.
```diff
@@ -32,6 +32,7 @@ expect throughout each step of the process.
 * [Commit message guidelines](#commit-message-guidelines)
 * [Step 5: Rebase](#step-5-rebase)
 * [Step 6: Test](#step-6-test)
+* [Step 6.5: Test Coverage](#step-65-test-coverage)
```
This section has great info. I'm not sure if CONTRIBUTING.md is the place for it. Can we move it to its own document in `doc/guides` and link to it from CONTRIBUTING.md? CONTRIBUTING.md is the file people get linked to and asked to read when they are in the process of opening a pull request.
Yeah, wasn't quite sure where to put it ... I like the idea of adding it to `writing-tests.md` or creating its own guide, depending on how others feel.
+1 to putting it in its own guide.
I was thinking about this myself; without introducing all the churn in test names we could make
Unless I'm missing something, it's the Makefile that combines the coverage reports and outputs them, so I think at a minimum this would be a nice change (making these variables configurable), so that the feedback loop when testing with coverage is lovely.
I have a feeling this structure will cause confusion for the tests we have that involve multiple modules (e.g.
Once I moved a couple of sub-systems into their own top-level folder, I definitely got somewhat spooked by the large number of files that would change in git history. The main benefit I see is that you would now be able to more easily run
vs.,
or perhaps:
I also do think it could put us in a position where answering the question "where should I add a test?" gets a bit easier. Test suites could default to being structured like

I think what I have found most confusing about the test structure currently is that some tests have snuck into their own top-level submodule-specific folders, e.g.,

Perhaps an alternative refactor of the test structure would be:

Step 1

Step 2

Double down on the current directory structure:
looping in @Fishrock123, given their original interest in a similar refactor.
I don't immediately have a great answer to this; I pictured that we'd probably keep the top-level
Is there a way to potentially add metadata to the test itself to introduce subsystems, rather than relying on the file system?
I think this is great work fwiw, just trying to spitball some alternative ideas that could involve less churn / be more flexible
@MylesBorins @bnoordhuis what if we did start to collapse tests back into a single folder, but introduced the following:

```js
// Type: sequential
// Flags: --experimental-modules
```

where type might be something like,

@MylesBorins you're picturing something more along the lines of continuing to have
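A rough sketch of what parsing such header comments could look like on the `test.py` side (`read_test_metadata` is a hypothetical helper illustrating the proposal, not code that exists in `tools/test.py`):

```python
import re

# Matches hypothetical header comments at the top of a test file, e.g.:
#   // Type: sequential
#   // Flags: --experimental-modules
HEADER_RE = re.compile(r'^//\s*(Type|Flags):\s*(.+)$')

def read_test_metadata(source):
    """Collect // Type: and // Flags: values from a test file's leading
    comment block. A sketch of the idea above, not actual test.py code."""
    meta = {}
    for line in source.splitlines():
        stripped = line.strip()
        match = HEADER_RE.match(stripped)
        if match:
            meta[match.group(1).lower()] = match.group(2).strip()
        elif stripped and not stripped.startswith('//'):
            break  # metadata only counts before the first code line
    return meta
```

The runner could then schedule a file as sequential or parallel based on `meta.get('type')` instead of on which directory it lives in.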
If there isn't consensus around isolating tests, I think introducing a helper for running a specific subsystem would be a good move that introduces less churn.
I don't think putting the 'type' in each file is going to be very efficient. Most people are just going to do

Having 'type' as another metadata comment can still have the same problem as the original separate-directory-per-module proposal.
We could look for the type metadata at the same time as when searching for
That wouldn't help in the common
At least for me, the natural way to solve this would be to allow
@gibfahn I am pretty sure that already works, I’ve run
Yes, that works, but it's fairly obscure and does not solve the general It's-Difficult-To-Manage-Thousands-Of-Tests-All-In-One-Directory problem.
I don't know, using file globbing is the standard command-line way to specify filenames, and you could actually just have the shell expand the filenames directly, like:

```sh
tools/test.py test/*/test-http2*
```

It's certainly the first thing I thought of.
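To illustrate the point (with throwaway scratch files rather than the real `test/` tree), it is the shell, not `tools/test.py`, that expands the glob into concrete paths:

```shell
# Create a miniature test/ layout in a temp directory, then let the
# shell expand the same style of glob used above.
demo="$(mktemp -d)"
mkdir -p "$demo/parallel" "$demo/sequential"
touch "$demo/parallel/test-http2-session.js" \
      "$demo/sequential/test-http2-timeout.js"
ls "$demo"/*/test-http2*   # the runner receives two explicit file paths
```

Because expansion happens before the runner starts, this works with any argv-based tool, no globbing support needed in `test.py` itself.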
So what are the specific issues with having large numbers of files in each directory? I know there's the "GitHub can't render more than 1000 files in one directory" issue, but what else are people running into? I don't think the current situation is ideal, but I also think @bnoordhuis and @mscdex make valid points about issues with this approach. 99% of test runs are
@gibfahn @jasnell given the feedback on this pull request, it feels like it would be difficult to reach consensus about splitting tests into subsystem-specific folders. I'm currently working on an alternative pull request that I think might be a good compromise ... and give me a lot of what I was itching for when running tests:
I think the work described above would improve the feedback loop for testing, without requiring that we move thousands of test files into a new structure out of the gate.
@bcoe ... Yep. I look forward to whatever you come up with :)
Problem Statement

I've been eyeballing test coverage reports as a way of finding subsystems where I can pitch in and help write more tests. In this process I believe I've identified a few aspects of the testing feedback loop that can be improved:

* run coverage for just one submodule, e.g., `http2`.
* tests in a submodule-specific folder, e.g., @bmeck's tests for `es-modules` can be found in `./test/es-modules`, other tests are placed in the catch-all `./test/parallel` folder.
* the `./test/parallel` folder, which makes it hard to easily work with in some text editors.
* `CI_JS_SUITES`.

Proposed Solution

This pull request introduces several mostly backwards-compatible changes to the test runner, in an effort to streamline the testing feedback loop:

* `CI_NATIVE_SUITES` and `CI_JS_SUITES` have been made configurable, so that `make coverage` can be run against a subset of suites, e.g., `CI_JS_SUITES=child-process* CI_NATIVE_SUITES= make coverage`.
* `tools/test.py` has been refactored such that it now performs a recursive walk of folders and supports nested suites, making it easier to group tests by subsystem: `./test/parallel` can be split into subsystem-specific suites, using the refactored `tools/test.py`.
* the special variable `DEFAULT_JS_SUITES` has been introduced. When this is observed, `test.py` will use a programmatically generated list of reasonable-default test suites.

Checklist

* `make -j4 test` (UNIX), or `vcbuild test` (Windows) passes

Affected core subsystem(s)

test, build

CC: @addaleax, @MylesBorins, @jasnell