JUnit
Neodymium customizes JUnit execution to provide additional features like test data handling, multi-browser support, and the resulting test multiplication. This page gives some insight into how this works when using Neodymium.
The @NeodymiumTest annotation can be used instead of @Test on any test method that should be executed with Neodymium. A test class can contain both plain JUnit5 tests and Neodymium tests:
public class DemoTest
{
    @NeodymiumTest
    public void ensureFunctionality()
    {
        // add test code
    }
}
The NeodymiumRunner can be used by simply annotating the test case class. Please see the example below:
@RunWith(NeodymiumRunner.class)
public class DemoTest
{
    @Test
    public void ensureFunctionality()
    {
        // add test code
    }
}
With Neodymium, a test case can be executed with different test data sets, thus enabling data-driven testing. Furthermore, Neodymium supports running a test case with its data sets in different browsers. This is what we call the Neodymium multi-browser support. These features can be combined with each other. While the multi-browser support is activated by annotating classes or methods, multiple data set usage is triggered whenever a matching data file can be found. Please follow the links below for more details on these topics.
@Browser
@RandomBrowser
@SuppressBrowser
Please check the Multi browser support page for detailed information.
@DataSet
@SuppressDataSet
@DataFile
Please check the Test data provider page for detailed information.
In general, Neodymium keeps JUnit's test execution order (see the JUnit4 and JUnit5 documentation for more info). This means that by default there is no fixed order among the methods annotated with @Test or @NeodymiumTest.
The test methods are retrieved as an unordered list, but during computation the tests generated by multiplying one test method (the cross product of method, browsers, and test data sets) are added to the list as one complete set. JUnit's method ordering is applied first, then the browsers in the order stated within the test, and finally the data sets in the order they are listed within the data file or as specified by the @DataSet annotation. That means that while we cannot determine when in the overall order a test will be executed, we can ensure that all runs of the same Neodymium test are executed one after the other in a defined order until complete, before moving on to the next test.
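The ordering described above can be sketched in plain Java. This is an illustrative model of the cross-product multiplication, not Neodymium's actual implementation; the class and method names are made up for the example:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch: how one test method is multiplied into the cross product of
// browsers and data sets, and why all runs of a method stay contiguous.
public class ExecutionOrderSketch
{
    static List<String> expand(List<String> methods, List<String> browsers, List<String> dataSets)
    {
        List<String> runs = new ArrayList<>();
        for (String method : methods)             // JUnit's method order first
        {
            for (String browser : browsers)       // then browser order as annotated
            {
                for (String dataSet : dataSets)   // finally data set order from the file
                {
                    runs.add(method + " / " + browser + " / " + dataSet);
                }
            }
        }
        return runs;
    }

    public static void main(String[] args)
    {
        List<String> runs = expand(
            List.of("a", "b"),
            List.of("Chrome_1400x1000", "FF_1024x768"),
            List.of("dataSet1", "dataSet2", "dataSet3"));
        runs.forEach(System.out::println); // 2 methods * 2 browsers * 3 data sets = 12 runs
    }
}
```

All twelve runs of method a appear as one block before any run of method b, mirroring the grouping described above.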
As stated above, JUnit does not execute the computed test methods in any particular order. Where a fixed, predictable execution order is desired or required, JUnit4's FixMethodOrder annotation and JUnit5's TestMethodOrder annotation can be used for Neodymium tests as well (see the examples below).
Given the following test class and assuming we execute it with a data file containing 3 data sets:
@TestMethodOrder(MethodOrderer.OrderAnnotation.class)
@Browser("Chrome_1400x1000")
@Browser("FF_1024x768")
public class HomePageTest
{
    @BeforeAll
    public static void systemSetUp()
    {
        // anything that should happen before the test class is initiated
    }

    @BeforeEach
    public void setUp()
    {
        // anything that should happen before each test method starts
    }

    @Order(1)
    @NeodymiumTest
    public void a()
    {
        System.out.println("testA with browser: '" + Neodymium.getBrowserProfileName() + "' and data set: '" + Neodymium.dataValue("testId") + "'.");
    }

    // this overwrites the order of the data file
    @DataSet(id = "dataSet3")
    @DataSet(id = "dataSet2")
    @DataSet(id = "dataSet1")
    @Order(3)
    @NeodymiumTest
    public void b()
    {
        System.out.println("testB with browser: '" + Neodymium.getBrowserProfileName() + "' and data set: '" + Neodymium.dataValue("testId") + "'.");
    }

    // this overwrites the browser order of the test class
    @Browser("FF_1024x768")
    @Browser("Chrome_1400x1000")
    @Order(2)
    @NeodymiumTest
    public void c()
    {
        System.out.println("testC with browser: '" + Neodymium.getBrowserProfileName() + "' and data set: '" + Neodymium.dataValue("testId") + "'.");
    }

    @AfterEach
    public void tearDown()
    {
        // anything that should happen after each single test method
    }

    @AfterAll
    public static void systemTearDown()
    {
        // anything that should happen once all tests of this class have finished
    }
}
The console output of the code above would then look like this:
testA with browser: 'Chrome_1400x1000' and data set: 'dataSet1'.
testA with browser: 'Chrome_1400x1000' and data set: 'dataSet2'.
testA with browser: 'Chrome_1400x1000' and data set: 'dataSet3'.
testA with browser: 'FF_1024x768' and data set: 'dataSet1'.
testA with browser: 'FF_1024x768' and data set: 'dataSet2'.
testA with browser: 'FF_1024x768' and data set: 'dataSet3'.
testC with browser: 'FF_1024x768' and data set: 'dataSet1'.
testC with browser: 'FF_1024x768' and data set: 'dataSet2'.
testC with browser: 'FF_1024x768' and data set: 'dataSet3'.
testC with browser: 'Chrome_1400x1000' and data set: 'dataSet1'.
testC with browser: 'Chrome_1400x1000' and data set: 'dataSet2'.
testC with browser: 'Chrome_1400x1000' and data set: 'dataSet3'.
testB with browser: 'Chrome_1400x1000' and data set: 'dataSet3'.
testB with browser: 'Chrome_1400x1000' and data set: 'dataSet2'.
testB with browser: 'Chrome_1400x1000' and data set: 'dataSet1'.
testB with browser: 'FF_1024x768' and data set: 'dataSet3'.
testB with browser: 'FF_1024x768' and data set: 'dataSet2'.
testB with browser: 'FF_1024x768' and data set: 'dataSet1'.
Similarly for JUnit4, given the following test class and assuming we execute it with a data file containing 3 data sets:
@RunWith(NeodymiumRunner.class)
@Browser("Chrome_1024x768")
@Browser("Firefox_1024x768")
@FixMethodOrder(MethodSorters.NAME_ASCENDING)
public class OrderDemoTest
{
    @BeforeClass
    public static void systemSetUp()
    {
        // anything that should happen before the test class is initiated
    }

    @Before
    public void setUp()
    {
        // anything that should happen before each test method starts
    }

    @Test
    public void testC()
    {
        System.out.println("testC with browser: '" + Neodymium.getBrowserProfileName() + "' and data set: '" + Neodymium.dataValue("testId") + "'.");
    }

    @Test
    // this overwrites the order of the data file
    @DataSet(id = "dataSet3")
    @DataSet(id = "dataSet2")
    @DataSet(id = "dataSet1")
    public void testB()
    {
        System.out.println("testB with browser: '" + Neodymium.getBrowserProfileName() + "' and data set: '" + Neodymium.dataValue("testId") + "'.");
    }

    @Test
    // this overwrites the browser order of the test class
    @Browser("Firefox_1024x768")
    @Browser("Chrome_1024x768")
    public void testA()
    {
        System.out.println("testA with browser: '" + Neodymium.getBrowserProfileName() + "' and data set: '" + Neodymium.dataValue("testId") + "'.");
    }

    @After
    public void tearDown()
    {
        // anything that should happen after each single test method
    }

    @AfterClass
    public static void systemTearDown()
    {
        // anything that should happen once all tests of this class have finished
    }
}
The console output of the code above would then look like this:
testA with browser: 'Firefox_1024x768' and data set: 'dataSet1'.
testA with browser: 'Firefox_1024x768' and data set: 'dataSet2'.
testA with browser: 'Firefox_1024x768' and data set: 'dataSet3'.
testA with browser: 'Chrome_1024x768' and data set: 'dataSet1'.
testA with browser: 'Chrome_1024x768' and data set: 'dataSet2'.
testA with browser: 'Chrome_1024x768' and data set: 'dataSet3'.
testB with browser: 'Chrome_1024x768' and data set: 'dataSet3'.
testB with browser: 'Chrome_1024x768' and data set: 'dataSet2'.
testB with browser: 'Chrome_1024x768' and data set: 'dataSet1'.
testB with browser: 'Firefox_1024x768' and data set: 'dataSet3'.
testB with browser: 'Firefox_1024x768' and data set: 'dataSet2'.
testB with browser: 'Firefox_1024x768' and data set: 'dataSet1'.
testC with browser: 'Chrome_1024x768' and data set: 'dataSet1'.
testC with browser: 'Chrome_1024x768' and data set: 'dataSet2'.
testC with browser: 'Chrome_1024x768' and data set: 'dataSet3'.
testC with browser: 'Firefox_1024x768' and data set: 'dataSet1'.
testC with browser: 'Firefox_1024x768' and data set: 'dataSet2'.
testC with browser: 'Firefox_1024x768' and data set: 'dataSet3'.
Neodymium and its example projects use Maven as the execution environment. For test execution we use the Apache Maven Surefire plugin.
By default there is also no particular order for the execution of the test case classes. However, you can specify the order by adding the runOrder parameter to the Surefire configuration within the pom.xml file. Please check the official documentation for more details.
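A minimal sketch of such a configuration (alphabetical is just one of the supported values; consult the Surefire documentation for the full list):

```
<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-surefire-plugin</artifactId>
    <configuration>
        <!-- run test classes in alphabetical order; other values include
             reversealphabetical, random and filesystem -->
        <runOrder>alphabetical</runOrder>
    </configuration>
</plugin>
```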
Neodymium also offers an option to create so-called test suites. To do so, you need to create a separate class for each suite and annotate it as follows:
@RunWith(Suite.class)
@Suite.SuiteClasses(
{
    RegisterFromUserMenuTest.class, RegisterTest.class, LoginTest.class
})
public class UserTestSuite
{
    @BeforeClass
    public static void before()
    {
        System.out.println("before all tests in suite");
    }

    @AfterClass
    public static void after()
    {
        System.out.println("after all tests in suite");
    }
}
As you can see in the example, the suite class should be annotated with @RunWith(Suite.class) and @Suite.SuiteClasses(), the latter listing the test classes that belong to the suite. All test classes in the suite can be configured like normal Neodymium tests, meaning they should contain @RunWith(NeodymiumRunner.class) and browser configurations. If you use suites to run tests, you may want to exclude those same tests from other test execution configurations, since they would otherwise be run twice unnecessarily.
Although this approach can be handy, be careful not to misuse it, as it can increase the test execution time. It can also be useful when some of your tests need the same setup or cleanup. Please be aware that at the time the methods annotated with @BeforeClass or @AfterClass are called, no browser and test data configuration is available.
As tests within a suite are not parallelized, you can use this option to avoid some collisions during a test run. It is also possible to use Maven Surefire plugin configurations to avoid collisions; that approach also works for JUnit5. If you are interested, please read about it in the chapter Avoiding collisions of tests with Maven Surefire.
To group tests you can also use the @Tag annotation for JUnit5 and the @Category annotation for JUnit4. This option is more flexible than suite grouping but does not allow executing code before or after the whole test group.
With the @Tag annotation offered by JUnit5 you can simply use strings, e.g.:
@Tag("TestsWithFailures")
public class A
{
    @NeodymiumTest
    public void a()
    {
        fail();
    }

    @Tag("SlowTests")
    @NeodymiumTest
    public void b()
    {
    }

    @Tag("FastTests")
    @Tag("SmokeTests")
    @NeodymiumTest
    public void c()
    {
    }
}
For JUnit4 you first need to create a category, which is represented by a marker interface. For example, to create a category for slow tests, define the following interface:
public interface SlowTests {
}
Now you can use this interface to group tests into categories. Unlike test suite grouping, where you had to list all suite tests in the suite class, here you just need to annotate the tests belonging to the category with @Category(SlowTests.class).
This annotation is quite flexible: you can mark a single test method as belonging to a category. The annotation is also inherited, so if classes of the same category share a common superclass, you can simply annotate the superclass and avoid repeating the annotation.
It is also possible to make a test method or class belong to multiple categories with the annotation @Category({SlowTests.class, FastTests.class}). The example below shows all the possible ways to categorize tests:
@Category(TestsWithFailures.class)
public class A
{
    @Test
    public void a()
    {
        fail();
    }

    @Category(SlowTests.class)
    @Test
    public void b()
    {
    }

    @Category({FastTests.class, SmokeTests.class})
    @Test
    public void c()
    {
    }
}
To select tests by category in JUnit4, you can use the @IncludeCategory and @ExcludeCategory annotations. With these annotations you can include or exclude categories in test suites, as in the example below:
@RunWith(Categories.class)
@IncludeCategory({FastTests.class, SmokeTests.class})
@SuiteClasses({A.class, B.class})
public static class FastOrSmokeTestSuite {
// Will run A.c and B.d, but not A.b because it is not annotated with either FastTests or SmokeTests
}
Once you have categorized the tests, you can work with the category or tag name instead of enumerating test names. Categories can be selected via the maven-surefire-plugin or via JUnit, depending on what you need.
For selection with Maven, use the <groups> and <excludedGroups> tags. This is handy for creating dynamic, easy-to-maintain sets of tests: instead of listing every test that needs to be executed (or skipped) in a profile, you simply mention the tag and annotate the needed tests, which noticeably decreases the maintenance effort for the profile.
<profile>
    <id>critical</id>
    <build>
        <plugins>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-surefire-plugin</artifactId>
                <configuration>
                    <groups>Critical</groups>
                    <!-- for JUnit4 -->
                    <!-- <groups>zwilling.utility.Critical</groups> -->
                </configuration>
            </plugin>
        </plugins>
    </build>
</profile>
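Analogously, the <excludedGroups> tag skips tagged tests instead of selecting them. A minimal sketch (the profile id and tag name are illustrative):

```
<profile>
    <id>skip-slow</id>
    <build>
        <plugins>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-surefire-plugin</artifactId>
                <configuration>
                    <!-- run everything except tests tagged SlowTests -->
                    <excludedGroups>SlowTests</excludedGroups>
                </configuration>
            </plugin>
        </plugins>
    </build>
</profile>
```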
Important: JUnit5 also supports tag expressions, which allow more flexible grouping of tests. Please see the JUnit5 documentation for more details.
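For example, a Surefire <groups> entry can combine JUnit5 tags with the operators & (and), | (or) and ! (not); a small sketch using the tag names from the examples above:

```
<configuration>
    <!-- run everything tagged SmokeTests that is not also tagged SlowTests -->
    <groups>SmokeTests &amp; !SlowTests</groups>
</configuration>
```

Note that & must be escaped as &amp; inside the pom.xml.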
Sometimes it is required to run certain tests sequentially to avoid collisions, but executing all tests sequentially would increase the execution time of the whole test suite. To solve this problem you can configure test execution in Maven Surefire so that, for example, all the non-colliding tests are executed in parallel first and the tests that might collide are executed sequentially afterwards. This configuration looks like the following:
<build>
    <plugins>
        <plugin>
            <groupId>org.apache.maven.plugins</groupId>
            <artifactId>maven-surefire-plugin</artifactId>
            <executions>
                <execution>
                    <id>run-tests-parallel</id>
                    <phase>test</phase>
                    <goals>
                        <goal>test</goal>
                    </goals>
                    <configuration>
                        <forkCount>${surefire.forkCount}</forkCount>
                        <includes>
                            <include>tests/that/can/be/executed/parallel/**/*Test.java</include>
                        </includes>
                        <!-- or you can use <groups>category class or tag expression</groups> -->
                        <skipTests>false</skipTests>
                    </configuration>
                </execution>
                <execution>
                    <id>run-tests-serial</id>
                    <phase>test</phase>
                    <goals>
                        <goal>test</goal>
                    </goals>
                    <configuration>
                        <forkCount>1</forkCount>
                        <includes>
                            <include>tests/that/should/be/executed/sequentially/**/*Test.java</include>
                        </includes>
                        <!-- or you can use <groups>category class or tag expression</groups> -->
                        <skipTests>false</skipTests>
                    </configuration>
                </execution>
            </executions>
        </plugin>
    </plugins>
</build>