
Node support for --allow-uncaught #2697

Closed
lrowe wants to merge 3 commits into mochajs:master from lrowe:allowUncaught-node

Conversation

@lrowe (Contributor) commented Feb 1, 2017

It's super handy to be able to drop into a debugger on test failure. See #1985.

@jsf-clabot commented Feb 1, 2017

CLA assistant check
All committers have signed the CLA.

@coveralls

Coverage Status

Coverage increased (+0.008%) to 82.111% when pulling 0ab5310 on lrowe:allowUncaught-node into bc06b85 on mochajs:master.

@Munter added labels Feb 6, 2017: type: feature (enhancement proposal), semver-minor (implementation requires increase of "minor" version number; "features"), status: needs review (a maintainer should (re-)review this pull request)
@danqa commented May 4, 2017

Can anyone tell when this might be merged?

@ScottFreeCode (Contributor)

Well, this comment is still an outstanding issue as far as I'm aware (it's even in the documentation), so it will take a little digging.

Additionally, it would be helpful to get some background (and/or test cases?) on the first two commits. There doesn't seem to be an issue with pending tests or with detecting/handling the error when I use allowUncaught in the browser in the current version of Mocha, so I am not sure what they're supposed to be fixing. (I'm also not sure off the top of my head what side effects they may have, but that just means we're going to have to dig into that area of the code and find out -- if those changes are in fact needed, that is; if they're not then it would probably be best to remove them and just fix allowUncaught on the CLI.)

@lrowe (Contributor, Author) commented May 5, 2017

Without dae3428 running this test:

"use strict";
it("should pass", function() {});

Run bin/mocha --inspect --debug-brk --allow-uncaught test.js, open the debugger URL in Chrome, enable "Pause on uncaught exceptions", and click run.

The Chrome debugger will pause on an Error: done() called multiple times, resulting from the call to done() inside the if (this.allowUncaught) { clause.

Without f76afbc running this test:

"use strict";
it("done called twice", function(done) {
  done();
  done();
});

Running bin/mocha --allow-uncaught test.js then fails to report Error: done() called multiple times.

@ScottFreeCode (Contributor)

Thanks! I was able to confirm both of these experimenting with the code locally. Some interesting things I discovered in doing so:

  • The error fixed by dae3428 wasn't showing up for me before because of the same issue that f76afbc fixes (presumably the error fixed by dae3428 was there all along and simply not reported till f76afbc fixed the reporting of double-done)
  • dae3428 changes the code to match what's inside the try of the non-allow-uncaught case (this is just below the end of what the diff shows or I would have noticed before). So I'm pretty sure that's right. In fact, I kinda think we should refactor that common bit of code into a function that can be called in the "if allowUncaught" branch and in the try without any hypothetical subsequent changes getting out of sync again.

So besides the original question of why allowUncaught was only intended for the browser initially (I wonder if there simply were no Node debugging techniques available, or none supported by Mocha, at the time allowUncaught was added?), the only other thing we need to double-check is that the fix in f76afbc doesn't cause double-reporting of anything that was already being reported. I haven't tracked down everything that emits the error event on the test object; if somebody can verify that nothing else emits an event that allow-uncaught mode would pick up the way it detects normal failures, without interfering with the debugger, that should be enough to be reasonably confident this is safe too. Alternatively (and for general robustness going forward), @mochajs/core, would it be possible to run the test suite as a whole a second time with --allow-uncaught on, to make sure we get the same results from all our tests with and without it?

By and large this looks good to me, so hopefully we can double-check those last couple things and get it merged in shortly.

@ScottFreeCode (Contributor)

Tracked it down: the only place emitting an error event is here:

mocha/lib/runnable.js

Lines 270 to 290 in 4ed3fc5

function multiple (err) {
  if (emitted) {
    return;
  }
  emitted = true;
  self.emit('error', err || new Error('done() called multiple times; stacktrace may be inaccurate'));
}
// finished
function done (err) {
  var ms = self.timeout();
  if (self.timedOut) {
    return;
  }
  if (finished) {
    return multiple(err || self._trace);
  }
  self.clearTimeout();
  self.duration = new Date() - start;
  finished = true;

Which basically means it is only used to report errors caused by done being called more than once.

So both of the fixes should be perfectly safe as far as I can tell.

@ScottFreeCode (Contributor) left a comment

Having investigated the incidental fixes and found them both necessary and correct, and considering (admittedly without having dug up the history for certain) that allowUncaught may simply have been left unimplemented in Node on the assumption that it couldn't be used there, I think this should be merged unless anyone has major concerns about pinning down exactly why allowUncaught was originally browser-only.

@boneskull (Contributor)

Alternatively (and for general robustness going forward), @mochajs/core is it possible for us to, say, run the test suite as a whole a second time with --allow-uncaught on to make sure we get the same results from all our tests with and without it?

That might not be a bad idea, but it's unlikely we'd get the same results, right? Then, we'd need to ensure the results we do get are the results we'd expect.

Assuming allowUncaught currently works in the browser as intended, I don't understand why dae3428 and f76afbc were necessary. Does this mean currently, it doesn't work properly in the browser? Could these two changesets be a standalone bugfix for allowUncaught?

If not, what's special about Node.js that these two changesets are necessary?

@ScottFreeCode Apologies if this is what you were trying to say. 😉

@ScottFreeCode (Contributor)

@boneskull allowUncaught is currently bugged in the browser too (I tested it there with the current code on npm). When a test calls done multiple times, that is reported as a failure without allowUncaught but not when allowUncaught is active. And once that is fixed, Mocha incorrectly reports multiple done calls for nearly every test, because it makes a redundant internal done call when allowUncaught is on (incidentally, the fix here makes that code match the code in the try block used when allowUncaught is off). I think these bugs must simply have surfaced while testing allowUncaught on the CLI. I wouldn't say no to a separate PR fixing them either!

@boneskull (Contributor)

I'm going to merge this manually; I want to squash dae3428 and f76afbc.

@boneskull (Contributor)

Merged via fb1e083 and 4d1d91d

@boneskull (Contributor)

@lrowe Thanks!!!

@boneskull closed this May 9, 2017
@ScottFreeCode (Contributor)

This has been published in v3.4.0!
