JUnit
Neodymium customizes the JUnit execution to provide features such as test data handling, multi-browser support and the resulting test multiplication. On this page we give some insight into how this works when using Neodymium.
For JUnit5, the `@NeodymiumTest` annotation can be used instead of `@Test` on any test method that should be executed with Neodymium. The test class can contain both plain JUnit5 tests and Neodymium tests:
```java
public class DemoTest
{
    @NeodymiumTest
    public void ensureFunctionality()
    {
        // add test code
    }
}
```
For JUnit4, the `NeodymiumRunner` can be used by annotating the test class with `@RunWith`. Please see the example below:
```java
@RunWith(NeodymiumRunner.class)
public class DemoTest
{
    @Test
    public void ensureFunctionality()
    {
        // add test code
    }
}
```
With Neodymium, a test case can be executed with different test data sets, which makes data-driven testing possible. Furthermore, Neodymium offers support to run the test cases with these data sets in different browsers; this is what we call the Neodymium multi-browser support. Both features can be combined with each other. While the multi-browser support is activated by annotating classes or methods, the data set usage is already active as soon as a matching data file exists. Please follow the links below for more details on these topics; a small combined sketch follows after the lists.
For the multi-browser support:
- `@Browser`
- `@RandomBrowser`
- `@SuppressBrowser`

Please check the [Multi browser support] page for detailed information.

For the test data handling:
- `@DataSet`
- `@SuppressDataSet`
- `@DataFile`

Please check the [Test data provider] page for detailed information.
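As a combined sketch (assuming the browser profiles `Chrome_1024x768` and `FF_1024x768` exist in your browser configuration and a matching data file with the data sets `dataSet1` and `dataSet2` is present for the test class; these names are just examples):

```java
@Browser("Chrome_1024x768")
@Browser("FF_1024x768")
public class CombinedDemoTest
{
    // runs once per class-level browser and once per data set found in the matching data file
    @NeodymiumTest
    public void ensureFeature()
    {
        // add test code
    }

    // runs only with dataSet2 and only in the Chrome profile,
    // since method-level annotations override the class-level setup (see the ordering examples below)
    @Browser("Chrome_1024x768")
    @DataSet(id = "dataSet2")
    @NeodymiumTest
    public void ensureSpecialCase()
    {
        // add test code
    }
}
```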
In general, Neodymium relies on the test execution order of JUnit (more information is available for JUnit4 and for JUnit5). This means that by default there is no fixed order within the methods annotated with `@Test` or `@NeodymiumTest`.
The test methods are retrieved as an unordered list, but while computing the test multiplication for test data sets and browsers, they are added as complete sets (the cross product of method, browsers and test data sets) for each method that is affected by Neodymium's annotations (see above). JUnit's method ordering is applied first. The browsers are used in the order stated within the test, and the data sets are executed in the order in which they are listed within the data file or as specified via the `@DataSet` annotation. The browsers are applied before the data sets, i.e. all data sets are run for one browser before the next browser is used.
As stated above, JUnit does not execute the computed test methods in any particular order. For cases where a fixed and predictable execution order is desired or required, JUnit4's `@FixMethodOrder` annotation and JUnit5's `@TestMethodOrder` annotation can be used for Neodymium tests as well (see the examples below).
### Example JUnit5
Given the following test class, let's assume we execute it with a data file containing 3 data sets:
```java
@TestMethodOrder(MethodOrderer.OrderAnnotation.class)
@Browser("Chrome_1400x1000")
@Browser("FF_1024x768")
public class HomePageTest
{
    @Order(1)
    @NeodymiumTest
    public void testA()
    {
        System.out.println("testA with browser: '" + Neodymium.getBrowserProfileName() + "' and data set: '" + Neodymium.dataValue("testId") + "'.");
    }

    // this overwrites the order of the data file
    @DataSet(id = "dataSet3")
    @DataSet(id = "dataSet2")
    @DataSet(id = "dataSet1")
    @Order(3)
    @NeodymiumTest
    public void testB()
    {
        System.out.println("testB with browser: '" + Neodymium.getBrowserProfileName() + "' and data set: '" + Neodymium.dataValue("testId") + "'.");
    }

    // this overwrites the browsers of the test class
    @Browser("FF_1024x768")
    @Browser("Chrome_1400x1000")
    @Order(2)
    @NeodymiumTest
    public void testC()
    {
        System.out.println("testC with browser: '" + Neodymium.getBrowserProfileName() + "' and data set: '" + Neodymium.dataValue("testId") + "'.");
    }
}
```
The console output of the code above would then look like this:
```
testA with browser: 'Chrome_1400x1000' and data set: 'dataSet1'.
testA with browser: 'Chrome_1400x1000' and data set: 'dataSet2'.
testA with browser: 'Chrome_1400x1000' and data set: 'dataSet3'.
testA with browser: 'FF_1024x768' and data set: 'dataSet1'.
testA with browser: 'FF_1024x768' and data set: 'dataSet2'.
testA with browser: 'FF_1024x768' and data set: 'dataSet3'.
testC with browser: 'FF_1024x768' and data set: 'dataSet1'.
testC with browser: 'FF_1024x768' and data set: 'dataSet2'.
testC with browser: 'FF_1024x768' and data set: 'dataSet3'.
testC with browser: 'Chrome_1400x1000' and data set: 'dataSet1'.
testC with browser: 'Chrome_1400x1000' and data set: 'dataSet2'.
testC with browser: 'Chrome_1400x1000' and data set: 'dataSet3'.
testB with browser: 'Chrome_1400x1000' and data set: 'dataSet3'.
testB with browser: 'Chrome_1400x1000' and data set: 'dataSet2'.
testB with browser: 'Chrome_1400x1000' and data set: 'dataSet1'.
testB with browser: 'FF_1024x768' and data set: 'dataSet3'.
testB with browser: 'FF_1024x768' and data set: 'dataSet2'.
testB with browser: 'FF_1024x768' and data set: 'dataSet1'.
```
### Example JUnit4
Given the following test class, let's assume we execute it with a data file containing 3 data sets:
```java
@RunWith(NeodymiumRunner.class)
@Browser("Chrome_1024x768")
@Browser("Firefox_1024x768")
@FixMethodOrder(MethodSorters.NAME_ASCENDING)
public class OrderDemoTest
{
    @Test
    public void testC()
    {
        System.out.println("testC with browser: '" + Neodymium.getBrowserProfileName() + "' and data set: '" + Neodymium.dataValue("testId") + "'.");
    }

    @Test
    // this overwrites the order of the data file
    @DataSet(id = "dataSet3")
    @DataSet(id = "dataSet2")
    @DataSet(id = "dataSet1")
    public void testB()
    {
        System.out.println("testB with browser: '" + Neodymium.getBrowserProfileName() + "' and data set: '" + Neodymium.dataValue("testId") + "'.");
    }

    @Test
    // this overwrites the browser order of the test class
    @Browser("Firefox_1024x768")
    @Browser("Chrome_1024x768")
    public void testA()
    {
        System.out.println("testA with browser: '" + Neodymium.getBrowserProfileName() + "' and data set: '" + Neodymium.dataValue("testId") + "'.");
    }
}
```
The console output of the code above would then look like this:
```
testA with browser: 'Firefox_1024x768' and data set: 'dataSet1'.
testA with browser: 'Firefox_1024x768' and data set: 'dataSet2'.
testA with browser: 'Firefox_1024x768' and data set: 'dataSet3'.
testA with browser: 'Chrome_1024x768' and data set: 'dataSet1'.
testA with browser: 'Chrome_1024x768' and data set: 'dataSet2'.
testA with browser: 'Chrome_1024x768' and data set: 'dataSet3'.
testB with browser: 'Chrome_1024x768' and data set: 'dataSet3'.
testB with browser: 'Chrome_1024x768' and data set: 'dataSet2'.
testB with browser: 'Chrome_1024x768' and data set: 'dataSet1'.
testB with browser: 'Firefox_1024x768' and data set: 'dataSet3'.
testB with browser: 'Firefox_1024x768' and data set: 'dataSet2'.
testB with browser: 'Firefox_1024x768' and data set: 'dataSet1'.
testC with browser: 'Chrome_1024x768' and data set: 'dataSet1'.
testC with browser: 'Chrome_1024x768' and data set: 'dataSet2'.
testC with browser: 'Chrome_1024x768' and data set: 'dataSet3'.
testC with browser: 'Firefox_1024x768' and data set: 'dataSet1'.
testC with browser: 'Firefox_1024x768' and data set: 'dataSet2'.
testC with browser: 'Firefox_1024x768' and data set: 'dataSet3'.
```
Neodymium and its example projects use Maven as the execution environment. For test execution we use the Apache Maven Surefire plugin. By default, there is also no particular order for the execution of the test case classes, but you can specify one by adding the `runOrder` parameter to the Surefire configuration within the `pom.xml` file. Please check the official documentation for more details.
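As a minimal sketch, the `runOrder` parameter could be set like this (`alphabetical` is just one of the available values):

```xml
<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-surefire-plugin</artifactId>
    <configuration>
        <!-- execute the test classes in alphabetical order -->
        <runOrder>alphabetical</runOrder>
    </configuration>
</plugin>
```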
Neodymium also offers an option to create so-called suites of tests. To do so, you need to create a separate class for each suite and annotate it as follows:
```java
@RunWith(Suite.class)
@Suite.SuiteClasses(
{
    RegisterFromUserMenuTest.class, RegisterTest.class, LoginTest.class,
})
public class UserTestSuite
{
}
```
If some of your tests need the same setup or cleanup, the suite class can additionally contain `@BeforeClass` and `@AfterClass` methods:

```java
@RunWith(Suite.class)
@Suite.SuiteClasses(
{
    RegisterFromUserMenuTest.class, RegisterTest.class, LoginTest.class,
})
public class UserTestSuite
{
    @BeforeClass
    public static void before()
    {
        System.out.println("before all tests in suite");
    }

    @AfterClass
    public static void after()
    {
        System.out.println("after all tests in suite");
    }
}
```
As you can see in the example, the suite class should be annotated with `@RunWith(Suite.class)` and `@Suite.SuiteClasses()`; the latter should contain the tests that belong to the suite. All tests in a suite should be configured like regular Neodymium tests, i.e. they contain `@RunWith(NeodymiumRunner.class)` and the browser configuration. Please make sure you exclude all tests mentioned in a suite from your execution configuration to avoid these tests running twice (see the sketch below).
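One way to achieve this, sketched below under the assumption that all your suite classes follow a `*Suite` naming pattern, is to configure the Surefire plugin to pick up only the suite classes, so the contained tests are started exclusively through the suites:

```xml
<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-surefire-plugin</artifactId>
    <configuration>
        <includes>
            <!-- run only the suite classes; their member tests are executed via the suites -->
            <include>**/*Suite.java</include>
        </includes>
    </configuration>
</plugin>
```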
Although this approach might be handy, be careful not to misuse it, as it can increase the test execution time. This option might also be useful in case some of your tests need the same setup or cleanup. Please pay attention to the fact that at the time the methods annotated with `@BeforeClass` or `@AfterClass` are executed, there is no browser and no test data configuration available.
As there is no parallelization for the tests in a suite, you can use this option to avoid collisions during the test run. It's also possible to use Maven Surefire Plugin configurations to avoid such collisions; this approach also works for JUnit5. If you are interested, please read about it in the chapter Avoiding collisions of tests with Maven Surefire below.
To group the tests you can also use the `@Tag` annotation for JUnit5 and the `@Category` annotation for JUnit4. This option is more flexible than suite grouping but doesn't allow executing code before or after the whole test group.
The `@Tag` annotation offered by JUnit5 simply takes strings, e.g.:
@Tag("TestsWithFailures")
public class A {
@NeodymiumTest
public void a() {
fail();
}
@Tag("SlowTests")
@NeodymiumTest
public void b() {
}
@Tag("FastTests")
@Tag("SmokeTests")
@NeodymiumTest
public void c() {
}
}
For JUnit4 you first need to create a category, which is represented by a marker interface. So, to create a category for slow tests, create the following interface:
```java
public interface SlowTests {
}
```
Now you can use this interface to group tests into categories. Unlike the test suite grouping, where you had to list all tests of a suite in the suite class, here you just need to annotate the tests that belong to the category with `@Category(SlowTests.class)`.
The high flexibility of this annotation allows you to mark even a single test method as belonging to a category. The annotation is also inherited, so if classes of the same category share a common super class, you can simply annotate that super class and avoid redundant repetition.
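For instance (a sketch; the base class and test class names below are hypothetical):

```java
// all tests in classes extending this base class belong to the SlowTests category
@Category(SlowTests.class)
public abstract class AbstractSlowTest {
}

public class CheckoutTest extends AbstractSlowTest {
    @Test
    public void slowCheckoutProcess() {
        // add test code
    }
}
```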
It's also possible to make a test method or class belong to multiple categories with the following annotation: `@Category({SlowTests.class, FastTests.class})`. In the example below you can see all the possible ways to categorize tests:
```java
@Category(TestsWithFailures.class)
public class A {
    @Test
    public void a() {
        fail();
    }

    @Category(SlowTests.class)
    @Test
    public void b() {
    }

    @Category({FastTests.class, SmokeTests.class})
    @Test
    public void c() {
    }
}
```
For JUnit4 you can additionally use the `@IncludeCategory` and `@ExcludeCategory` annotations. Using these annotations you can include or exclude categories in test suites, as in the example below:
```java
@RunWith(Categories.class)
@IncludeCategory({FastTests.class, SmokeTests.class})
@SuiteClasses({A.class, B.class})
public class FastOrSmokeTestSuite {
    // will run A.c and B.d, but not A.b because it belongs to neither FastTests nor SmokeTests
}
```
After you have categorized the tests, you can work with the category or tag name instead of enumerating test names. You can handle the categories either with the `maven-surefire-plugin` or with JUnit itself, depending on what you need.
For the Maven approach, use the `<groups>` and `<excludedGroups>` configuration elements. This is handy to create dynamic sets of tests that are easy to maintain: instead of listing in a profile all the tests that need to be executed (or skipped), you simply mention the tag and annotate all the needed tests, which decreases the maintenance effort for the profile.
```xml
<profile>
    <id>critical</id>
    <build>
        <plugins>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-surefire-plugin</artifactId>
                <configuration>
                    <groups>Critical</groups>
                    <!-- for JUnit4 -->
                    <!-- <groups>zwilling.utility.Critical</groups> -->
                </configuration>
            </plugin>
        </plugins>
    </build>
</profile>
```
Important: JUnit5 also supports tag expressions, which allow a more flexible grouping of tests. Please read more here.
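For illustration, such a tag expression could be placed in the Surefire `<groups>` element; the tag names below are just the ones used in the example above:

```xml
<configuration>
    <!-- run all tests tagged SmokeTests that are not also tagged SlowTests -->
    <groups>SmokeTests &amp; !SlowTests</groups>
</configuration>
```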
### Avoiding collisions of tests with Maven Surefire
Sometimes it is required to run certain tests sequentially to avoid collisions, but at the same time executing all tests sequentially would increase the execution time of the whole test suite. To solve this problem you can configure the test execution in Maven Surefire so that, for example, all non-colliding tests are executed in parallel first and the tests that might collide are executed sequentially afterwards. This configuration looks like the following:
```xml
<build>
    <plugins>
        <plugin>
            <groupId>org.apache.maven.plugins</groupId>
            <artifactId>maven-surefire-plugin</artifactId>
            <executions>
                <execution>
                    <id>run-tests-parallel</id>
                    <phase>test</phase>
                    <goals>
                        <goal>test</goal>
                    </goals>
                    <configuration>
                        <forkCount>${surefire.forkCount}</forkCount>
                        <includes>
                            <include>tests/that/can/be/executed/parallel/**/*Test.java</include>
                        </includes>
                        <!-- or you can use <groups>category class or tag expression</groups> -->
                        <skipTests>false</skipTests>
                    </configuration>
                </execution>
                <execution>
                    <id>run-tests-serial</id>
                    <phase>test</phase>
                    <goals>
                        <goal>test</goal>
                    </goals>
                    <configuration>
                        <forkCount>1</forkCount>
                        <includes>
                            <include>tests/that/should/be/executed/sequentially/**/*Test.java</include>
                        </includes>
                        <!-- or you can use <groups>category class or tag expression</groups> -->
                        <skipTests>false</skipTests>
                    </configuration>
                </execution>
            </executions>
        </plugin>
    </plugins>
</build>
```
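The `${surefire.forkCount}` placeholder used above assumes a corresponding Maven property; a minimal sketch:

```xml
<properties>
    <!-- number of parallel Surefire forks for the parallel execution; adjust to your needs -->
    <surefire.forkCount>4</surefire.forkCount>
</properties>
```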