
🐛 Bug: Hook failures change the shape of the test suite #1955

Open · cowboyd opened this issue Nov 5, 2015 · 13 comments
Labels: semver-major (implementation requires increase of "major" version number; "breaking changes") · status: accepting prs (Mocha can use your help with this one!) · type: feature (enhancement proposal)

Comments

@cowboyd (Contributor) commented Nov 5, 2015

If a beforeEach hook fails, then no subsequent tests in the suite or any of its sub-suites are run. For example:

// hook-test.js
describe("outer context", function() {
  beforeEach(function() {
    throw new Error("this is a failure in a before each hook");
  });
  it("reports the first assertion", function() {

  });
  it("does not report the existence of this test case", function() {

  });
  describe("inner context", function() {
    it("does not report report its existence in the output", function() {

    });
  });

});

reports only a single test case, even though three are defined:

$ mocha --reporter min hook-test.js
0 passing (6ms)
  1 failing

  1) outer context "before each" hook for "reports the first assertion":
     Error: this is a failure in a before each hook
      at Context.<anonymous> (test/mocha-test.js:3:11)

As outlined in #1043, this is the intended behavior for beforeEach as well as the other hooks. This makes sense from an efficiency perspective; after all, there is no point in actually running the test cases when it is assured that they are going to fail.

The problem with this is that when you're refactoring a large codebase, upgrading an underlying library, or rehabilitating a codebase whose test suite has been allowed to fall out of sync (work that might take days or weeks), this behavior changes how many total test cases are reported. So as you make changes, the reporting varies wildly. We've seen the pass/total numbers of test cases jump from 95/220, to 4/8, then down to 0/1 in the case of a global hook failure, then back up to 35/160, all in a matter of minutes.

This can be very disorienting, and it obscures your overall progress toward the goal: a completely green suite where all tests pass. The fact that a test is not run does not mean that it doesn't exist, or that it isn't important.

Rather than exclude a test from the output entirely, it makes more sense to skip running it but still report it as a failure. That way, the shape of the test suite remains constant: if I have 225 test cases, then that's what I have, even if only 26 of them are passing. I at least know that I'm roughly 12% of the way there and which tests are failing for the same reason, and I can track total progress.
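
Under that proposal, the example above might report something like this (a hypothetical sketch of the desired output, not what Mocha actually prints today):

$ mocha --reporter min hook-test.js
0 passing (6ms)
  3 failing

  1) outer context "before each" hook for "reports the first assertion": ...
  2) outer context "before each" hook for "does not report the existence of this test case": ...
  3) outer context inner context "before each" hook for "does not report its existence in the output": ...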

If this makes sense, it would be something I'd be happy to help implement.

@cowboyd (Contributor, Author) commented Nov 11, 2015

Any thoughts on this? I'm cc'ing @travisjeffery, @boneskull, and @danielstjules since you all seem to be the most active in the issues.

As mentioned, I'd be willing to take a stab at implementing this if I thought that the PR were likely to be accepted. Cheers!

@travisjeffery (Contributor) commented:

If the code to implement the fix isn't too crazy, I'd likely accept it.

@cletusw commented Apr 5, 2016

I'm also experiencing this. I work around it by running Mocha programmatically and manually marking as failed any tests that match my current grep but have no state set.

@parxier commented Jun 28, 2016

@cletusw can you please post an example of how you:

  • run mocha programmatically
  • interpret and enrich results (set state as failed)

@cletusw commented Jun 28, 2016

I use my own mocha-parallel-js library (runner.js is the part that runs Mocha), then do the following to get the results into the format that I want:

var tests = [];
// rootSuite is from Mocha
if (rootSuite.suites) {
  // grep is provided by the user and also passed to Mocha
  var userGrep = new RegExp(grep);
  tests = flatten(rootSuite).filter(function (test) {
    // Remove tests that didn't run because of a grep
    return userGrep.test(test.fullTitle);
  });

  tests.forEach(function (test) {
    // When a beforeEach fails, mocha skips the remaining tests in the
    // suite but doesn't mark them as failed, which makes the PASS/FAIL
    // numbers weird. If a non-pending, grep-matching test didn't run, we
    // just mark it as failed.
    test.pending || test.state || (test.state = 'failed');
  });
}

function flatten(suite) {
  // Build full titles top-down; the parent's fullTitle has already been set
  // by the time we recurse into its children, so this accumulates the whole
  // ancestry the way Mocha's own fullTitle() does.
  suite.fullTitle = ((suite.parent && suite.parent.fullTitle) ? (suite.parent.fullTitle + ' ') : '') + suite.title;
  suite.topSuiteTitle = (suite.parent && suite.parent.topSuiteTitle) ? suite.parent.topSuiteTitle : suite.title;
  if (suite.tests) {
    suite.tests.forEach(function(test) {
      test.fullTitle = suite.fullTitle + ' ' + test.title;
      test.topSuiteTitle = suite.topSuiteTitle;
    });
  }
  // Depth-first: this suite's tests, followed by the tests of every sub-suite.
  return suite.suites.map(flatten).reduce(function (a, b) {
    return a.concat(b);
  }, suite.tests || []);
}
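
For the first bullet of @parxier's question, here is a minimal sketch of driving Mocha through its documented programmatic API (this is not cletusw's actual runner.js; hook-test.js is the example file from the issue description):

var Mocha = require('mocha');

var mocha = new Mocha({ reporter: 'min' });
mocha.addFile('hook-test.js');

// mocha.run() returns the Runner; runner.suite is the root suite that the
// snippet above walks as `rootSuite`.
var runner = mocha.run(function (failures) {
  process.exitCode = failures ? 1 : 0;
});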

@cletusw commented Aug 15, 2016

@cowboyd Are you working on this by any chance? This is a huge problem for us and I'd really like to remove the hack I noted above.

@mekdev commented Oct 18, 2016

Hi, I'd like to second the priority of this as well. It's causing inconsistent results for us. We also wrote a custom reporter that marks the whole suite as needing investigation, since a beforeEach() hook failure means some tests will be missing from the results.
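
For reference, a minimal sketch of the kind of reporter described above (the reporter name and message are illustrative, not mekdev's actual code); it listens for fail events whose runnable is a hook, then flags the run at the end:

var Base = require('mocha').reporters.Base;

// Pass this constructor to mocha.reporter(HookAwareReporter) when running
// Mocha programmatically.
function HookAwareReporter(runner) {
  Base.call(this, runner);
  var hookFailed = false;

  // Mocha emits 'fail' for hooks as well as tests; hooks have type 'hook'.
  runner.on('fail', function (runnable) {
    if (runnable.type === 'hook') {
      hookFailed = true;
    }
  });

  runner.on('end', function () {
    if (hookFailed) {
      console.log('NEEDS INVESTIGATION: a hook failed, so the totals above undercount the suite.');
    }
  });
}

module.exports = HookAwareReporter;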

@cowboyd (Contributor, Author) commented Oct 18, 2016

@cletusw No, I'm not working on this. It's in that category of papercuts that we're willing to cope with because the solution seems like a lot of work. I'd be willing to collaborate, pair program, etc., but I can't take the lead on implementing this directly.

@boneskull (Contributor) commented:

I haven't had a chance to look at this. It doesn't sound too bad, unless the reporters wouldn't understand the change.

@drazisil drazisil added the type: feature enhancement proposal label Mar 30, 2017
@nlundquist commented:

Even if this required changes to reporters, it would be worthwhile.

@shridharkalagi commented:

Any update on this issue? I'm facing a similar problem.

@vinczedani commented:

We're having the same issue. For us it would be logical to see the test cases reported as failed when the before or beforeEach hook fails; right now they just disappear.

@JoshuaKGoldberg JoshuaKGoldberg changed the title Hook failures change the shape of the test suite 🐛 Bug: Hook failures change the shape of the test suite Dec 27, 2023
@JoshuaKGoldberg (Member) commented Dec 27, 2023

Not sure if semver major or minor 🤔

Edit: ...just to be sure, let's go with semver major.

@JoshuaKGoldberg JoshuaKGoldberg added status: accepting prs Mocha can use your help with this one! semver-major implementation requires increase of "major" version number; "breaking changes" labels Jan 21, 2024