A Guide to the Engine Test Kit in JUnit 5
Last updated: November 22, 2025
1. Overview
The Engine Test Kit allows us to execute Test Plans and gather statistics and reports at varying levels of detail depending on our needs. For example, it provides us with an overview of how many tests pass and fail. Or, we can check if the individual test outcomes match what we expected.
In this tutorial, we’ll look at what JUnit 5’s Engine Test Kit is and how we can use it in our applications.
2. Example Test Class
To start, we’ll need a test class that we want to run and gather information on. To make the information we gather interesting, we need tests that pass, fail, skip, and abort.
Let’s first define a small Object to use in our tests, a Display that has a Platform and a height:
public class Display {

    private final Platform platform;
    private final int height;

    // standard constructor and getters
}
Let’s then write the Platform enum:
public enum Platform {
    DESKTOP,
    MOBILE
}
We’ll use the Platform we’ve just defined to allow us to have tests that should only run for mobile displays. Now that we’ve got our test objects, let’s use them in a test class:
public class DisplayTest {

    private final Display display = new Display(Platform.DESKTOP, 1000);

    @Test
    void whenCorrect_thenSucceeds() {
        assertEquals(1000, display.getHeight());
    }

    @Test
    void whenIncorrect_thenFails() {
        assertEquals(500, display.getHeight());
    }

    @Test
    @Disabled("Flakey test needs investigating")
    void whenDisabled_thenSkips() {
        assertEquals(999, display.getHeight());
    }

    @Test
    void whenAssumptionsFail_thenAborts() {
        assumeTrue(display.getPlatform() == Platform.MOBILE, "test only runs for mobile");
    }
}
Here we’ve got a suite of four tests. The first will succeed as we’ve got the correct height. The second will fail as we’ve got the height wrong. The third will be skipped as it’s marked with the @Disabled annotation. Finally, the fourth will abort as it should only run for mobile Displays, and the Display under test is for desktops.
3. Verify the Test Engine
The Engine Test Kit gives us the ability to use the Test Engine we decide on. The default one in JUnit 5 is junit-jupiter. However, it’s possible to write our own or to use others, such as junit-vintage, which is handy for running tests written for older versions of JUnit.
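As a sketch, swapping engines only changes the engine ID we pass in. Here, LegacyDisplayTest is a hypothetical JUnit 4 test class containing a single passing test, and we assume the junit-vintage-engine artifact is on the test classpath:

```java
@Test
void givenVintageEngine_whenRunningLegacyTest_thenTestsExecute() {
    // Hypothetical: LegacyDisplayTest is a JUnit 4 test class with
    // one passing test; requires junit-vintage-engine on the classpath
    EngineTestKit.engine("junit-vintage")
      .selectors(selectClass(LegacyDisplayTest.class))
      .execute()
      .testEvents()
      .assertStatistics(stats -> stats.succeeded(1));
}
```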
A good way to start off our tests is to confirm we can find the Test Engine we would like to use. We can then also check that the Test Engine picks up our Test Plan. Let’s write a test that does both of those tasks:
@Test
void givenJunitJupiterEngine_whenRunningTestSuite_thenTestsAreDiscovered() {
    EngineDiscoveryResults results = EngineTestKit.engine("junit-jupiter")
      .selectors(selectClass(DisplayTest.class))
      .discover();

    assertEquals(emptyList(), results.getDiscoveryIssues());
}
In this test, we’ve started by specifying the name of the Test Engine we want to use, "junit-jupiter". Then we’ve passed in our test class, and finally called discover(). Our assertion on the final line confirms that there were no issues with the discovery. Reviewing any issues that do pop up here is a good way to debug anything that’s gone wrong.
4. Collecting High-Level Test Statistics
So now we’ve got a Test Plan, and we’ve verified that our selected Test Engine is available and will detect our tests. It’s time to run our tests and gather up some high-level statistics:
@Test
void givenTestSuite_whenRunningAllTests_thenCollectHighLevelStats() {
    EngineTestKit
      .engine("junit-jupiter")
      .selectors(selectClass(DisplayTest.class))
      .execute()
      .testEvents()
      .assertStatistics(stats ->
        stats.started(3).finished(3).succeeded(1).failed(1).skipped(1).aborted(1));
}
This test starts similarly to the last one; we specify our Test Engine and our test class. Following that setup, we call execute() to run the tests. After the run, we assert all our expected statistics. Here we can see that we got exactly one success, one failure, one skip, and one abort, as expected.
There are other statistics we could have checked. For example, dynamicallyRegistered() would check the number of dynamic registration events that occurred during the Test Plan.
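To illustrate, here’s a minimal sketch of that check, assuming our DisplayTest suite registers no dynamic tests (it has no @TestFactory methods):

```java
@Test
void givenTestSuite_whenRunningAllTests_thenNoDynamicTests() {
    // Assumes DisplayTest declares no @TestFactory methods, so no
    // tests are registered dynamically during execution
    EngineTestKit
      .engine("junit-jupiter")
      .selectors(selectClass(DisplayTest.class))
      .execute()
      .testEvents()
      .assertStatistics(stats -> stats.dynamicallyRegistered(0));
}
```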
5. Collecting Test-Specific Events
Next, let’s look at how we can drill down even further into the results of our Test Plan.
5.1. Verifying the Reason for an Aborted Test
We know that we’re expecting one of our tests to be aborted. Before, we were able to assert that at least one test was aborted. However, it’s essential to understand that the test is aborting correctly, and that it’s doing so for the expected reason. We can use the Engine Test Kit to make sure of that by collecting the Events from the test run:
Events testEvents = EngineTestKit
  .engine("junit-jupiter")
  .selectors(selectMethod(DisplayTest.class, "whenAssumptionsFail_thenAborts"))
  .execute()
  .testEvents();
In the code here, we’ve specified our Test Engine as usual. We’ve then selected the specific method, whenAssumptionsFail_thenAborts(), from our DisplayTest class that we want to look at. To wrap up, we then run the test and ask for the Events to be returned. With our Events collected, we can now check that the abort happened and that the reason is what we expected:
testEvents.assertThatEvents()
  .haveExactly(1, event(test("whenAssumptionsFail_thenAborts"),
    abortedWithReason(instanceOf(TestAbortedException.class),
      message(message -> message.contains("test only runs for mobile")))));
In this section of the test, we are asserting three things. Firstly, we check that there’s only one event, as we only expected to run one test; this effectively verifies that we set things up right. We then use abortedWithReason() to check that we did actually abort the test and that the message is what we expected; the test only runs for mobile displays.
5.2. Verifying the Reason for a Failed Test
Let’s look at verifying that a test failed as expected, and for the reason we expected. To start as before, we’ll select a test engine and our whenIncorrect_thenFails() test, and collect the Events:
Events testEvents = EngineTestKit
  .engine("junit-jupiter")
  .selectors(selectMethod(DisplayTest.class, "whenIncorrect_thenFails"))
  .execute()
  .testEvents();
This time, we’ll check for two things: firstly, that we have a single Event for the single test. Secondly, we’ll use the finishedWithFailure() method to check that the test did fail as expected and the error that caused the failure is the right one:
testEvents.assertThatEvents()
  .haveExactly(1, event(test("whenIncorrect_thenFails"),
    finishedWithFailure(instanceOf(AssertionFailedError.class))));
We can be confident that the right test failed for the right reason using these assertions. Of course, we could be checking for any Error or Exception type here.
Alongside the abortedWithReason() and finishedWithFailure() methods we’ve used here, there’s a whole suite of methods for asserting that tests run or don’t run as expected, for example, skippedWithReason(), finishedSuccessfully(), and finishedWithCause().
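For instance, a minimal sketch using skippedWithReason() to check our disabled test, asserting the message we passed to @Disabled:

```java
@Test
void givenDisabledTest_whenRunningTest_thenSkippedWithReason() {
    // The expected reason must match the @Disabled message on
    // whenDisabled_thenSkips() in DisplayTest
    EngineTestKit
      .engine("junit-jupiter")
      .selectors(selectMethod(DisplayTest.class, "whenDisabled_thenSkips"))
      .execute()
      .testEvents()
      .assertThatEvents()
      .haveExactly(1, event(test("whenDisabled_thenSkips"),
        skippedWithReason("Flakey test needs investigating")));
}
```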
6. Conclusion
In this tutorial, we saw how the Engine Test Kit allows us to run a suite of tests. We tracked high-level statistics, such as the number of tests run, and counted how many passed, failed, were skipped, and were aborted. However, we could have tracked a range of other test outcomes.
We then went deeper and wrote tests to assert why a test was aborted or failed. This helps us confirm that behavior isn’t changing and that new failure reasons aren’t appearing.
As always, the full code for the examples is available over on GitHub.