
A spike to concrete discussion on how to arrange tests #1803

Closed
wants to merge 1 commit into from

Conversation

veikkoeeva
Contributor

Spike started from #1682. Personally I would like to have a direction on how to arrange relational persistence tests, even if we settle on only one direction and do more work in other PRs to improve test coverage. This is provided as a concrete piece for discussion to suss out ideas.

Goals

  1. Make it clearer, I hope, how to set up the test environment before a certain type of test is run. Here that is done by a specific class, TestEnvironmentInvariant. It has specific invariant methods, such as EnsureSqlServerMembershipEnvironment, that can be called prior to running a certain type of tests to ensure the test environment invariants hold, i.e. that the prerequisites for the given type of tests are set up correctly. It uses information from TestEnvironmentInformation to check in an environment-specific way. As one can guess from this, the invariants are checked in a rather granular way (more on the reasons shortly). One should be able to choose the granularity by writing a suitably named function and a corresponding implementation.
  2. Make the environment and test class combinations visible. The comment here notes that the connection string could be obtained in an environment-specific way from anywhere, even from Key Vault. This example setup is simplified and shows just a development machine setup, but I believe it could be expanded to handle any situation (and the logic can reuse internal setup logic).
  3. Make it possible to run any combination of tests in any environment. As an example for relational tests, this approach would allow one to use LocalDb when testing on hosted builds (e.g. LocalDb on VSTS) and/or SQL Server on Azure and MySQL deployed on Azure in some other builds, and then either or both on development machine builds.
  4. Allow the various types of tests to execute in parallel. This might make way for a baseline set of tests being run on every build, at least on every merge to the master branch, quickly for feedback across backend implementations. Maybe even stress testing in parallel.
  5. The types of tests could be defined like this, and a specific implementation exercising them could look like this. That is, it delegates the parameters to tests that all the implementations share. Instance data can be stored (in the common tests class) and specific backend types (like SQL Server) can define extra tests.
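The invariant-check idea in goal 1 could be sketched roughly like this. This is a Python sketch of the pattern only, not the PR's actual C# code; the class and method names mirror the ones mentioned above, and the connection-string map stands in for whatever environment-specific source (config file, Key Vault, ...) a real implementation would use:

```python
class TestEnvironmentInvariant:
    """Sketch of per-backend invariant checks run before a group of tests."""

    def __init__(self, connection_strings):
        # Map of backend name -> connection string, resolved per environment.
        self.connection_strings = connection_strings

    def ensure_sql_server_membership_environment(self):
        # Called before SQL Server membership tests; fails fast (or lets the
        # caller skip the tests) if the prerequisites are not set up.
        conn = self.connection_strings.get("SqlServer")
        if not conn:
            raise RuntimeError("SQL Server environment is not set up")
        return conn


inv = TestEnvironmentInvariant({"SqlServer": "Server=(localdb)\\MSSQLLocalDB;"})
print(inv.ensure_sql_server_membership_environment())
```

Each backend would get its own `Ensure...Environment` method, which is what gives the granularity described above.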

/// <summary>
/// Enumeration of the supported test environment configurations.
/// </summary>
public enum SupportedTestConfigurations
Contributor Author

@veikkoeeva veikkoeeva Jun 1, 2016


Discussion picked from another PR:

jdom

Hmmm, this is something we should not have... I might want to run a different combination of tests regardless of where it is actually running...

veikkoeeva

This is not used to select a combination of tests. This enumerates the platforms where tests are assumed to be run, because the enumeration value is easier to pass around. It is used like [here](https://github.com//pull/1682/files#diff-4b8eb4bc30b3f20f1d850ea8cbdf9fc5R57) in the switch. The functionality that detects the correct enumeration value is currently [hardcoded](https://github.com//pull/1682/files#diff-a0630db3acf1d2e9ce2c4751bb908aecR29), because I didn't want to go all the way before asking for feedback.

In reality the implementation could check, for instance, whether JENKINS_HOME is set (and that it is not in VSTS) and so determine the test suite is being run in GitHub Jenkins, then check that the invariants hold accordingly -- that is, decide what tests to run and what to skip (e.g. MySQL is installed, so run those tests but skip the Azure Emulator ones; other combinations would be OK too). For VSTS this could check path variables too and then, when checking the environment invariants hold, set up a hosted LocalDb build -- which is a bit involved. Or in VSTS one could get connection strings from Key Vault, or prepare for more and different tests to be run in the same suite.

I know this is asking for a bit of imagination here as this is not ready. I'm seeking opinions on whether this looks like an OK direction to take without actually using more time and going all the way down this path. What I think is that it would be beneficial to be able to run a broad set of tests in all the environments, and it looks to me it makes sense to make the invariant checks, or the "prepare the system" step, explicit per platform. It also looks like doing it this way allows one to run tests in parallel.

Some of this could be done to some extent in test class constructors and inheritance, but that would more or less fix how the tests are set up, and it would not be that easy to move the invariant checks (or other check-and-prepare steps) to different levels of sharing.
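To make the detection idea above concrete, a minimal sketch (in Python for brevity; JENKINS_HOME is a real Jenkins variable and TF_BUILD is set on VSTS build agents, but treating these alone as sufficient detectors is an assumption of this sketch):

```python
import os

def detect_test_environment():
    # Check the VSTS marker first so a Jenkins-like leftover variable on a
    # hosted agent does not misclassify the environment.
    if os.environ.get("TF_BUILD"):       # set on VSTS/Azure Pipelines agents
        return "Vsts"
    if os.environ.get("JENKINS_HOME"):   # set by Jenkins for its builds
        return "Jenkins"
    return "Development"                 # fallback: a developer machine
```

The returned value would then drive which invariant checks are performed and which tests are skipped.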

jdom

Regarding the environment enumeration, I would not go with that approach. I would instead allow for certain fallbacks, but not make it named-environment dependent. We already have such a (very rough) implementation of fallback with StorageTestConstants. I don't like that approach in particular, and it could be improved a lot, but I would still like that for people who have their environments set up correctly, those tests run regardless of the value they use in the enumeration (because they might have SQL connection strings but not the Azure storage ones, and each fixture should pick up correctly regardless of the other, and not be bound to the same enum value).

Contributor Author


@jdom I moved the discussion here.

I think we are on the same page here. I don't know what would be a suitable arrangement. Some questions we could ponder:

a. How many environments there would need to be enumerated?
b. What kind of testing would we support both for Orleans project and perhaps other people running Orleans test builds in their rig?

I use connection strings as an example here, but this could be any other operation that needs to be performed. What would be a suitable way to pick the right connection string depending on the environment? I can think of having them in a configuration file and then supplying the right one (or transforming one, say, with XDT) to be read when the tests start. But it still seems one needs to have them somewhere dependent on the environment. A tacit assumption here is that on Windows one has VS, and hence LocalDb, installed, likely also the Azure Storage Emulator, and their default connection strings work. If they don't work, then either the precondition or the fixture checks fail and the tests aren't run. This can happen regardless of the test environment detection, but the connection strings need to be picked from somewhere.
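One way to read the "configuration file per environment" idea is a sketch like the following, where settings come from a well-known file or from a file named by an environment variable. Both the file name TestSettings.json and the variable name TEST_SETTINGS_FILE are made up here for illustration:

```python
import json
import os

def load_test_settings(env_var="TEST_SETTINGS_FILE",
                       default_path="TestSettings.json"):
    # Prefer a file pointed to by an environment variable (e.g. set per
    # build definition); fall back to a well-known file next to the tests.
    path = os.environ.get(env_var, default_path)
    if not os.path.exists(path):
        # No file: callers fall back to default connection strings
        # (LocalDb, the storage emulator, ...) and skip what is unreachable.
        return {}
    with open(path) as f:
        return json.load(f)
```

This keeps the environment-specific part (where the settings live) out of the test classes themselves.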

Then one issue is that there might be a need for environment-dependent setup (like setting environment variables for hosted LocalDb, though this particular need might not be a real one).

What would be a fallback and how would it work? Would the fallback be "development environment", and would we then detect whether LocalDb (SQL Server), MySQL, ZooKeeper and the Azure Storage Emulator are installed by using their default connection strings and trying to connect to them? That might be a sensible fallback if all else fails, but how to choose the necessary settings and perform the necessary setup in other environments? I'm not sure if the environments diverge now, and if not, whether some precaution should be taken to prepare for likely situations.
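The "try the defaults and skip what isn't there" fallback could look roughly like this. A sketch only: the endpoints listed are common development-machine defaults, not anything Orleans defines, and a real check would attempt an actual client connection rather than a bare TCP probe:

```python
import socket

DEFAULT_ENDPOINTS = {
    "MySql": ("127.0.0.1", 3306),
    "ZooKeeper": ("127.0.0.1", 2181),
    "AzureStorageEmulator": ("127.0.0.1", 10000),
}

def backend_reachable(name, timeout=1.0):
    # A cheap TCP probe: if the default endpoint does not answer, the
    # fixture skips that backend's tests instead of failing the suite.
    host, port = DEFAULT_ENDPOINTS[name]
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

A fixture would call this once per backend and mark the corresponding tests as skipped when it returns False.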

(Also, it may be evident, but the code in the spike isn't that polished.)

@veikkoeeva
Contributor Author

I found a question on SO where someone asks how to run backend tests in parallel using theories. Might be of interest.

@sergeybykov
Contributor

@dotnet-bot test this please

@veikkoeeva
Contributor Author

@sergeybykov I could probably close this. I have an improved implementation over at eac23b5. I'm at peace if the approach is deemed too "my way"; if so, it can be excluded now that I'm reasonably sure the tests show there aren't blatant bugs. One of the reasons I went with this route is that I didn't find a clean way to implement the tests (so they'd run concurrently) and thought I'd take another run at it.

I'll add some more implementation notes (some are on Gitter) and think about how to best address the grain type version issue (from #1897), I think within the next sixteen hours (I'll go to bed soon). As for the tests, the goals are the same as here. The idea, I think, isn't radical: rather than making test classes initialize their state and invariants, separate the environment-specific parts elsewhere and then, if needed, load settings from a well-known custom file or from a custom file pointed to by an environment variable. Also the core tests are more decoupled from any testing infrastructure; for example, in addition to the core tests, the relational tests assert something common to all relational backends, and then even vendor-specific asserts can be made too (it might not make sense to assert all things similarly).

The tests look like this, ensuring the environment looks like this, and this is how the tests get their "access to environment". If you give it a go, those tests should skip quickly if, for instance, MySQL can't be reached and set up.

@sergeybykov
Contributor

@veikkoeeva With #1682 this one isn't needed, right? If so, let's close it then.

@veikkoeeva veikkoeeva closed this Jul 21, 2016
@veikkoeeva veikkoeeva deleted the testspike branch December 30, 2016 20:15
@github-actions github-actions bot locked and limited conversation to collaborators Dec 13, 2023