
Feature request: Should be able to assert an "Inconclusive" (Pending) state for a unit test #395

Closed
theficus opened this issue Jul 24, 2015 · 19 comments
@theficus

I've noticed that if you run a test without any body, it appears to return with a pending state. There doesn't appear to be any way to force a test with a body to exit with this state.

In my case I have some tests that are dependent on a specific configuration or binary type. If I run all tests, returning a "pass" result for these types of tests isn't really correct because the test didn't do anything. Returning a "fail" result isn't correct either because the test didn't actually fail. Either binary pass/fail result is misleading and just confuses the test output because it's not entirely accurate.

MSTest has a concept for this with Assert.Inconclusive. This works around the problem nicely by being able to assert that your test ran but didn't pass or fail.

It would be great if Pester had a similar concept to allow you to exit a test in this inconclusive state.

@theficus theficus changed the title Feature request: Should be able to assert an "Inconclusive" state for a unit test Feature request: Should be able to assert an "Inconclusive" (Pending) state for a unit test Jul 24, 2015
@dlwyatt
Member

dlwyatt commented Jul 24, 2015

The It command has a -Pending switch which causes the test to be flagged that way. It's not quite the same as "Inconclusive", but it may be fine for what you need.
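For reference, a minimal sketch of the -Pending switch on It (assumes Pester is installed and imported; the Describe and test names are made up for illustration):

```powershell
# Minimal sketch; assumes the Pester module is installed and imported.
Describe 'Configuration-dependent behaviour' {
    # -Pending marks the test as Pending and skips its body entirely.
    It 'verifies the debug-only code path' -Pending {
        # never executed while the test is pending
    }
}
```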

@theficus
Author

-Pending doesn't do what I want. When I use -Pending, it just skips the test completely.

I have no way to go into a test, perform some checks, and then explicitly say "this isn't a pass or fail but pending".

@nohwnd
Member

nohwnd commented Jul 25, 2015 via email

@theficus
Author

Example: I have a test that requires a debug build of a cmdlet. If I'm using a retail build, neither a pass nor a fail is correct.


On Jul 25, 2015, at 11:40, Jakub Jareš <notifications@github.com> wrote:

Pending means that the test is a work in progress. Its only purpose is to let you mark one or more tests as unfinished, typically when the test is empty or you need to postpone work on the current scenario to fix some other issue.

Could you post an example test where making it inconclusive would be useful?


@nohwnd
Member

nohwnd commented Aug 6, 2015

This seems like too much complexity. You will end up with test runs where some of the tests are inconclusive for one reason and some are inconclusive for another. A better approach would be to run the test only for the debug build and not run it for other builds. If you move your debug tests into a separate test file, you can do that pretty easily even in the current version of Pester.

@theficus
Author

theficus commented Aug 6, 2015 via email

@dlwyatt
Member

dlwyatt commented Aug 7, 2015

It's not difficult to implement. What do you think the command name should be in Pester? Set-TestInconclusive, or something like that?

@theficus
Author

theficus commented Aug 7, 2015

I was thinking over possible implementations and something like this seems pretty reasonable. Set-TestInconclusive or Set-TestResultInconclusive would be a sensible name.

@dlwyatt
Member

dlwyatt commented Aug 7, 2015

As far as implementation, I'm thinking that the new command will just throw a particular exception / ErrorRecord, and the catch block that's already in the It command will check for that.
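A rough sketch of that approach, using hypothetical names (this is not the actual Pester source): the new command throws an exception carrying a recognisable marker, and the runner's catch block checks for the marker before treating the error as a test failure.

```powershell
# Hypothetical sketch, not Pester's real implementation.
function Set-TestInconclusive {
    param ([string] $Message)
    # The '[Inconclusive]' prefix is the marker the catch block looks for.
    throw "[Inconclusive] $Message"
}

function Invoke-FakeIt {
    # Stand-in for Pester's It command, reduced to its catch logic.
    param ([string] $Name, [scriptblock] $Test)
    try {
        & $Test
        "$Name : Passed"
    }
    catch {
        if ($_.Exception.Message.StartsWith('[Inconclusive]')) {
            "$Name : Inconclusive"
        }
        else {
            "$Name : Failed"
        }
    }
}

Invoke-FakeIt 'needs debug build' { Set-TestInconclusive 'retail build detected' }
# Outputs: needs debug build : Inconclusive
```

A real implementation would more likely throw a dedicated exception type or an ErrorRecord with a well-known FullyQualifiedErrorId rather than matching on a message prefix, but the control flow is the same.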

@dlwyatt dlwyatt self-assigned this Oct 13, 2015
@matt-richardson

Just ran into this.

My use case is testing some code that calls into the Windows Failover Clustering cmdlets. However, we are testing our code, not the failover clustering cmdlets, which may not be installed on all developers' computers.

It would be nice to do:

$cmd = Get-Command 'Get-ClusterResource' -ErrorAction SilentlyContinue
if ($null -eq $cmd)
{
    Set-TestResultInconclusive "This computer does not have the Windows failover clustering cmdlets available."
}

We have worked around it (for now) by just creating stub functions that we then Mock. Bit odd, but given that we are just testing our code, it works...
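A rough sketch of that stub-and-Mock workaround, assuming Pester is installed (the stub body and the test are illustrative, not the poster's actual code):

```powershell
# Stub for machines without the Failover Clustering module installed.
# It only needs to exist so that Mock has a command to replace.
function Get-ClusterResource {
    throw 'Get-ClusterResource stub was called without a Mock in place.'
}

Describe 'Code that consumes cluster resources' {
    It 'reads the resource state' {
        Mock Get-ClusterResource {
            [pscustomobject]@{ Name = 'Disk1'; State = 'Online' }
        }
        (Get-ClusterResource).State | Should Be 'Online'
    }
}
```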

@nohwnd
Member

nohwnd commented Oct 13, 2015

@matt-richardson What would that be for? Your code depends on the modules, so if a module is missing your code should fail, and your tests should fail as well. Making such a test inconclusive hides the fact that your code would fail in production and, more importantly, introduces noise into the TDD cycle.

Personally I think a better approach would be to split unit and integration tests, and simply run integration tests only on systems with all the dependencies installed.

On the rest of the systems you can still test your logic by Mocking the dependencies, as you are doing now. If you are interested I recently had a discussion about mocking missing SharePoint modules here and here, where you can find a small function to generate the Module stubs for you.

@theficus
Author

Why not recognize that this is desired functionality for some developers? Microsoft and NUnit clearly do, as they have this "inconclusive" concept built into their test harnesses.

@matt-richardson

@nohwnd I subscribe to the principle that someone should be able to get the code from source control, and it will build. I don't want people to have to install dependencies to make my script tests work. I also use Chef to do provisioning, so I know that a given box I'm going to deploy onto is in a given state. However, I am not willing to make all developer boxes look like all production boxes in terms of dependencies (that are not related to anything most developers are actually working on).

I also don't want to have separate sets of tests - the stuff I'm testing is relatively simple, and I don't want to complicate it.

In my scenario, having an inconclusive option would work well for me. I don't expect that everyone is going to use it, nor should they have to. If other people want to handle it in other ways, that's cool. But as @theficus has said, some people would like this functionality. Another point in its favour is that other frameworks have obviously seen the need for this, and people are using it there (which makes people want it here). A final point is that it is a small and quick addition.

@nohwnd
Member

nohwnd commented Oct 14, 2015

@matt-richardson @theficus Okay, I give up. :) Are you going to create a pull request, or should we add the functionality?

@matt-richardson

I believe that @dlwyatt is already on the case - I had a brief chat to him about it yesterday.

@dlwyatt
Member

dlwyatt commented Oct 14, 2015

Looks like in our NUnit exports, we're already using "Inconclusive" status for our Pending tests. Not a huge deal, but there will be some overlap there if you have both types of tests in a Pester suite.

Should we just rename -Pending to -Inconclusive, or have them mean the same thing behind the scenes?

@nohwnd
Member

nohwnd commented Oct 14, 2015

@dlwyatt Rename Pending to Inconclusive in both the parameter and the screen output, and add a -Pending alias for the -Inconclusive parameter to maintain backwards compatibility?

@matt-richardson

Ahh - looks like this can be closed now that we've got Set-TestInconclusive? Thanks @dlwyatt!
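A hedged sketch of how the new command can be combined with the availability check from earlier in the thread (assumes a Pester version that ships Set-TestInconclusive; the message parameter and surrounding test are illustrative):

```powershell
# Illustrative only; requires a Pester version that provides Set-TestInconclusive.
Describe 'Cluster-dependent behaviour' {
    It 'uses the failover clustering cmdlets' {
        if (-not (Get-Command Get-ClusterResource -ErrorAction SilentlyContinue)) {
            # Mark the test as neither passed nor failed on machines without the module.
            Set-TestInconclusive 'Failover clustering cmdlets are not available.'
        }
        # real assertions against Get-ClusterResource would go here
    }
}
```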

@dlwyatt
Member

dlwyatt commented Feb 3, 2016

Yep. Odd that it didn't auto-close; GitHub usually does that when you merge a PR that references an issue.

@dlwyatt dlwyatt closed this as completed Feb 3, 2016