test_runner: print failing assertion only once with spec reporter #56662

Closed · wants to merge 1 commit
2 changes: 1 addition & 1 deletion lib/internal/test_runner/reporter/spec.js
```diff
@@ -52,7 +52,7 @@ class SpecReporter extends Transform {
       hasChildren = true;
     }
     const indentation = indent(data.nesting);
-    return `${formatTestReport(type, data, prefix, indentation, hasChildren)}\n`;
+    return `${formatTestReport(type, data, prefix, indentation, hasChildren, false)}\n`;
```
Contributor:

@pmarchini a simple fix for #56902 could be to set this to true when watch mode is enabled.

Member Author:

Makes sense! I'll take a look ASAP.
I think we should also add some tests to cover the reporter's output in watch mode.

Member Author:

@cjihrig How about allowing reporters to receive a watch mode flag via the constructor (if one exists)?
The alternative would be to use getOptionValue('--watch'), since we're in the internals, but off the top of my head, I don't see a straightforward way for users to specify whether they're in watch mode or not.

Contributor:

> I don't see a straightforward way for users to specify whether they're in watch mode or not.

It is easy to detect from process.execArgv. I think it's fine to use getOptionValue() here. Ideally, reporters should behave the same regardless of watch mode; I just think this was missed as part of this change because there was already a difference in behavior with watch mode. Right now I think we should aim to get this fixed ASAP, in time for the next release.
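For illustration, the process.execArgv check mentioned above might look like this (a hypothetical helper, not code from this PR; inside Node's internals the reporter would use getOptionValue('--watch') instead):

```javascript
// Hypothetical helper (not part of this PR): detect whether the current
// Node.js process was started with --watch. The flag can appear bare
// or with a value, e.g. `node --watch app.js` or `node --watch=./src app.js`.
function isWatchMode(execArgv = process.execArgv) {
  return execArgv.some((arg) => arg === '--watch' || arg.startsWith('--watch='));
}

console.log(isWatchMode(['--watch', 'app.js'])); // true
console.log(isWatchMode(['--watch=./src']));     // true
console.log(isWatchMode(['app.js']));            // false
```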

Member Author:

Agreed!

Member Author:

Okay, then I think we have two different issues:

  1. IMHO, it makes sense to print the details of the failures while in watch mode, as the whole point of enabling the watcher is to get fast feedback on errors.
  2. We should print on SIGINT, as a user might want to terminate the runner as soon as an error pops up to understand what's going on.

Considering this is quite urgent, I would cover the "standard" use case (--test --watch) in a first PR via getOptionValue(), as the only alternative I see is allowing an option in the spec reporter constructor.

WDYT?

Contributor:

I've thought about this a bit. I think the best course of action is:

  • Only backport this commit as far as Node 22. Node 20 doesn't really need this change and Node 18 will be EOL soon.
  • On Node 22 and up, we have a 'test:summary' event, which is the last event emitted. This is not impacted by watch mode, so the inconsistency goes away. If this event were to ever be backported to Node 20, then all of this could be backported as well.
  • Update the spec reporter to handle 'test:summary' events. Print the list of errors here instead of in the _flush() function. Also mark this change to only go back as far as the 'test:summary' event.
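A minimal sketch of that last point (illustrative names only, not the actual SpecReporter internals): buffer failures as events stream in and print the list when 'test:summary' arrives, instead of in the _flush() function, so a single run and a watch iteration produce the same output:

```javascript
// Illustrative sketch, not the real SpecReporter: collect failures while
// events stream in and emit the failure list on 'test:summary' instead of
// in _flush(), so the output no longer depends on when the stream ends.
function formatRun(events) {
  const failures = [];
  const lines = [];
  for (const { type, data } of events) {
    if (type === 'test:pass') {
      lines.push(`✔ ${data.name}`);
    } else if (type === 'test:fail') {
      failures.push(data);
      lines.push(`✖ ${data.name}`);
    } else if (type === 'test:summary') {
      if (failures.length > 0) {
        lines.push('✖ failing tests:');
        for (const f of failures) lines.push(`  ✖ ${f.name}: ${f.error}`);
      }
      failures.length = 0; // reset so the next watch iteration starts clean
    }
  }
  return lines.join('\n');
}
```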

Member Author:

sounds like a plan 🚀

Member Author:

I'm checking the stream of events from a multi-file run with watch mode enabled, and I see the following list of events:

```text
type: test:enqueue
type: test:dequeue
type: test:enqueue
type: test:enqueue
type: test:dequeue
type: test:complete
type: test:start
type: test:fail
type: test:summary
type: test:complete
type: test:dequeue
type: test:enqueue
type: test:dequeue
type: test:complete
type: test:start
type: test:pass
type: test:summary
type: test:complete
type: test:watch:drained
type: test:plan
type: test:diagnostic
type: test:diagnostic
type: test:diagnostic
type: test:diagnostic
type: test:diagnostic
type: test:diagnostic
type: test:diagnostic
type: test:diagnostic
type: test:summary
type: test:plan
type: test:diagnostic
type: test:diagnostic
type: test:diagnostic
type: test:diagnostic
type: test:diagnostic
type: test:diagnostic
type: test:diagnostic
type: test:diagnostic
type: test:summary
```

I think we should use test:watch:drained to correctly handle the watch use case, as test:summary seems to be reported after each test file.
Even if this means having the error details appear before the summary section, like this:

```text
✖ failing test (*ms)
✔ test has ran (*ms)

✖ failing tests:

*
✖ failing test (*ms)
  Error: failed
      *
      *
      *
      *
      *
      *
ℹ tests 2
ℹ suites 0
ℹ pass 1
ℹ fail 1
ℹ cancelled 0
ℹ skipped 0
ℹ todo 0
ℹ duration_ms *
ℹ tests 0
ℹ suites 0
ℹ pass 0
ℹ fail 0
ℹ cancelled 0
ℹ skipped 0
ℹ todo 0
ℹ duration_ms *
```

The alternative is to use test:summary, but only when data.file === undefined.

WDYT?
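The data.file check could be as simple as this (illustrative, based on the event shape described above):

```javascript
// Illustrative predicate: in a multi-file run, 'test:summary' is emitted
// once per file (with data.file set) and once for the whole run (with
// data.file === undefined). Only the run-level summary would trigger
// printing the failure list.
function isFinalSummary(event) {
  return event.type === 'test:summary' && event.data.file === undefined;
}

console.log(isFinalSummary({ type: 'test:summary', data: {} }));                    // true
console.log(isFinalSummary({ type: 'test:summary', data: { file: 'a.test.js' } })); // false
console.log(isFinalSummary({ type: 'test:pass', data: {} }));                       // false
```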

Contributor:

I left a comment on your PR, but I think you may want some logic in runner.js similar to what already exists for other events.

6 changes: 4 additions & 2 deletions lib/internal/test_runner/reporter/utils.js
```diff
@@ -59,7 +59,7 @@ function formatError(error, indent) {
   return `\n${indent} ${message}\n`;
 }
 
-function formatTestReport(type, data, prefix = '', indent = '', hasChildren = false) {
+function formatTestReport(type, data, prefix = '', indent = '', hasChildren = false, showErrorDetails = true) {
   let color = reporterColorMap[type] ?? colors.white;
   let symbol = reporterUnicodeSymbolMap[type] ?? ' ';
   const { skip, todo } = data;
```
```diff
@@ -71,10 +71,12 @@ function formatTestReport(type, data, prefix = '', indent = '', hasChildren = fa
   } else if (todo !== undefined) {
     title += ` # ${typeof todo === 'string' && todo.length ? todo : 'TODO'}`;
   }
-  const error = formatError(data.details?.error, indent);
+
+  const error = showErrorDetails ? formatError(data.details?.error, indent) : '';
   const err = hasChildren ?
     (!error || data.details?.error?.failureType === 'subtestsFailed' ? '' : `\n${error}`) :
     error;
+
   if (skip !== undefined) {
     color = colors.gray;
     symbol = reporterUnicodeSymbolMap['hyphen:minus'];
```
12 changes: 0 additions & 12 deletions test/fixtures/test-runner/output/assertion-color-tty.snapshot
```diff
@@ -1,16 +1,4 @@
 [31m✖ failing assertion [90m(*ms)[39m[39m
-  [AssertionError [ERR_ASSERTION]: Expected values to be strictly equal:
-  [32mactual[39m [31mexpected[39m
-
-  [39m'[39m[32m![39m[39mH[39m[39me[39m[39ml[39m[39ml[39m[39mo[39m[39m [39m[39mW[39m[39mo[39m[39mr[39m[39ml[39m[39md[39m[31m![39m[39m'[39m
-  ] {
-    generatedMessage: [33mtrue[39m,
-    code: [32m'ERR_ASSERTION'[39m,
-    actual: [32m'!Hello World'[39m,
-    expected: [32m'Hello World!'[39m,
-    operator: [32m'strictEqual'[39m
-  }
-
 [34mℹ tests 1[39m
 [34mℹ suites 0[39m
 [34mℹ pass 0[39m
```
23 changes: 0 additions & 23 deletions test/fixtures/test-runner/output/default_output.snapshot
```diff
@@ -1,32 +1,9 @@
 [32m✔ should pass [90m(*ms)[39m[39m
 [31m✖ should fail [90m(*ms)[39m[39m
-  Error: fail
-      *[39m
-      *[39m
-      *[39m
-      *[39m
-      *[39m
-      *[39m
-      *[39m
-
 [90m﹣ should skip [90m(*ms)[39m # SKIP[39m
 ▶ parent
 [31m✖ should fail [90m(*ms)[39m[39m
-  Error: fail
-      *[39m
-      *[39m
-      *[39m
-      *[39m
-      *[39m
-      *[39m
-      *[39m
-      *[39m
-      *[39m
-      *[39m
-
 [31m✖ should pass but parent fail [90m(*ms)[39m[39m
-  [32m'test did not finish before its parent and was cancelled'[39m
-
 [31m✖ parent [90m(*ms)[39m[39m
 [34mℹ tests 6[39m
 [34mℹ suites 0[39m
```
9 changes: 0 additions & 9 deletions test/fixtures/test-runner/output/eval_spec.snapshot
```diff
@@ -1,14 +1,5 @@
 ✔ passes (*ms)
 ✖ fails (*ms)
-  Error: fail
-      *
-      *
-      *
-      *
-      *
-      *
-      *
-
 ℹ tests 2
 ℹ suites 0
 ℹ pass 1
```