test_runner: pass signal on timeout #43911
New test file added by this PR (`@@ -0,0 +1,33 @@`):

```js
'use strict';
const common = require('../common');
const assert = require('assert');
const { spawnSync } = require('child_process');
const { setTimeout } = require('timers/promises');

if (process.argv[2] === 'child') {
  const test = require('node:test');

  if (process.argv[3] === 'abortSignal') {
    assert.throws(() => test({ signal: {} }), {
      code: 'ERR_INVALID_ARG_TYPE',
      name: 'TypeError'
    });

    let testSignal;
    test({ timeout: 10 }, common.mustCall(async ({ signal }) => {
      assert.strictEqual(signal.aborted, false);
      testSignal = signal;
      await setTimeout(50);
    })).finally(common.mustCall(() => {
      test(() => assert.strictEqual(testSignal.aborted, true));
    }));
  } else assert.fail('unreachable');
} else {
  const child = spawnSync(process.execPath, [__filename, 'child', 'abortSignal']);
  const stdout = child.stdout.toString();
  assert.match(stdout, /^# pass 1$/m);
  assert.match(stdout, /^# fail 0$/m);
  assert.match(stdout, /^# cancelled 1$/m);
  assert.strictEqual(child.status, 1);
  assert.strictEqual(child.signal, null);
}
```

Review thread on the `if (process.argv[3] === 'abortSignal') {` line:

- Is there any reason not to do this in a …

- That is already checked here:
  https://github.com/nodejs/node/blob/389b7e138e89a339fabe4ad628bf09cd9748f957/test/message/test_runner_abort.js

- From what I can tell, that file doesn't test the case where input validation throws. I pulled down your branch and ran this:

  ```js
  const test = require('node:test');
  test();
  ```

  It generated correct TAP output. Next, I ran this (taken from this PR):

  ```js
  const test = require('node:test');
  test({ signal: {} });
  ```

  There was no TAP output generated, only an uncaught exception. We need to decide what the test runner should do when input validation fails. The current behavior seems undesirable, since we could fail the test with the exception instead of crashing.

- I think I would expect no TAP output when a validation check fails. It's not a test failure; the test simply doesn't run, because the test runner couldn't make sense of what the user is asking.

- I agree the test runner should try to minimize the cases where the output is not valid TAP, since tooling does depend on the output.
Why use a subprocess here? Wouldn't it be simpler to test in the same process?
Otherwise we might as well do a message test instead.
`test()` never rejects, for the same reason discussed above: the TAP output will not be reliable otherwise.
I don't understand this argument. When running on CI, users will use the `--test` CLI flag, which wraps the files in valid TAP output. I don't think it makes sense to try to force valid TAP output in all circumstances when the flag is not there.
Let me rephrase: it is not just about the TAP output, but about the essential API of a test runner and what a test runner is expected to do. By definition, a test runner catches errors/rejections and does not rethrow them; it reports the failure in a standard way so whoever is looking (CI/human) can know the test failed. If you simply rethrow any error you catch, there is no real reason to use a test runner (correct me if I am missing anything?). The only difference here is that this error is not generated within the test, by the test author, but by the test runner. The question to be asked is "is a timeout considered a test failure?" If yes, then it should behave the same as `test(() => Promise.reject(new Error()))`, which definitely should not reject.

I disagree with relating to that only as the test runner; users should also be able to run a single test file and get valid output, or run multiple files using any logic they want.
Would rejecting the promise be at odds with any of those goals, really? I don't see a reason why we can't have both: `test(() => Promise.reject(new Error()))` returns a rejected promise, and we output valid TAP on stdout.
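To illustrate that the two goals are compatible, here is a hypothetical toy runner, not node:test's actual implementation, that both records a TAP-style failure line and hands the caller a rejected promise:

```javascript
'use strict';
// Toy runner (hypothetical): report the failure in TAP-style output, then
// still reject so callers that chain on the returned promise can observe it.
const log = [];

function runTest(name, fn) {
  return Promise.resolve()
    .then(fn)
    .then(
      () => { log.push(`ok - ${name}`); },
      (err) => {
        log.push(`not ok - ${name}`);
        throw err; // re-reject after reporting
      });
}

runTest('fails', () => Promise.reject(new Error('boom')))
  .catch(() => log.push('caller observed the rejection'))
  .then(() => console.log(log.join('\n')));
```

Running it prints `not ok - fails` followed by `caller observed the rejection`: the failure is reported in the output, and the rejection is still visible to the caller.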
I tested this piece of code with a few frameworks (syntax adjusted a little per framework). What you suggest (unless I misunderstood) is that only tests 1 and 1.1 need to run; I think users expect all three of 1, 1.1, and 1.2 to run. Results per framework:

- `test` does not return a promise
- `test` does not return a promise
- errors with "Returning a Promise from "describe" is not supported. Tests must be defined synchronously."; in addition, `test` does not return a promise
- `test` does not return a promise
- `test` actually returns a promise
I can see why test runners that don't return a promise would do that; it's weird to me that the promise resolves when the previous test fails. Anyway, if that's the consensus, let's roll with it, although it makes testing the test runner a bit trickier.
Yeah, the fact that a test framework returns a promise can be confusing; see this issue as well: nodejs/help#3837
This makes me think that a test runner should probably not return a promise, since it kind of violates IoC (inversion of control).