1. Overview

The Spliterator interface, introduced in Java 8, can traverse and partition sequences. It’s a base utility for Streams, especially parallel ones.

In this article, we’ll cover its usage, characteristics, methods and how to create our own custom implementations.

2. Spliterator API

2.1. tryAdvance

This is the main method used for stepping through a sequence. The method takes a Consumer that consumes the elements of the Spliterator one by one sequentially, and it returns false when there are no elements left to traverse.

Here, we’ll look at how to use it to traverse and partition elements.

First, let’s assume that we’ve got an ArrayList with 35000 articles and that the Article class is defined as:

public class Article {
    private List<Author> listOfAuthors;
    private int id;
    private String name;
    
    // standard constructors/getters/setters
}

Now, let’s use a Spliterator to process the list of articles and add the suffix “- published by Baeldung” to each article name:

@Test
public void givenAStreamOfArticles_whenProcessedSequentiallyWithSpliterator_thenProducesRightOutput() {
  // ...
}

First, let’s generate the articles:

public void givenAStreamOfArticles_whenProcessedSequentiallyWithSpliterator_thenProducesRightOutput() {
    List<Article> articles = Stream.generate(() -> new Article("Java"))
      .limit(35000)
      .collect(Collectors.toList());

    // ...
}

We have used Stream to generate 35000 articles. Next, let’s create a spliterator from this list of articles and use the tryAdvance method to process them:

Spliterator<Article> spliterator = articles.spliterator();
while (spliterator.tryAdvance(article -> article.setName(article.getName()
    .concat("- published by Baeldung"))));

Here, our consumer is a simple function that adds a suffix to the article names.

Finally, we can add an assertion to verify that all the articles were processed and their names were updated:

articles.forEach(article -> assertThat(article.getName()).isEqualTo("Java- published by Baeldung"));

Notice that this test case will execute successfully: all article names have been updated, and each new name is equal to Java- published by Baeldung.

Another key point is that we used the tryAdvance() method to process the next element.

2.2. trySplit

Next, let’s split Spliterators (hence the name) and process partitions independently.

The trySplit method tries to split the Spliterator into two parts. The caller then processes one part while the returned instance processes the other, allowing the two to be processed in parallel.

We will generate our articles and spliterator as we did previously:

@Test
public void givenAStreamOfArticles_whenProcessedUsingTrySplit_thenSplitIntoEqualHalves() {
    List<Article> articles = Stream.generate(() -> new Article("Java"))
      .limit(35000)
      .collect(Collectors.toList());

    Spliterator<Article> split1 = articles.spliterator();

    // ...
}

Then we create our second spliterator by applying the trySplit method on the first one:

Spliterator<Article> split2 = split1.trySplit();

In the above code, split1.trySplit() attempts to split the 35000 articles covered by split1 into two equal-sized parts. It returns a new spliterator covering one half of the elements, which we assign to split2, while split1 itself retains the other half.

Now let’s see these two splits in action. First, let’s create two lists that will store the articles processed by each spliterator:

List<Article> articlesListOne = new ArrayList<>(); 
List<Article> articlesListTwo = new ArrayList<>();

Let’s consume the articles:

split1.forEachRemaining(articlesListOne::add);
split2.forEachRemaining(articlesListTwo::add);

After creating the lists, we iterate through split1 and add all of its articles to articlesListOne. Similarly, we save each article of split2 into articlesListTwo.

Next, we can assert that each spliterator consumed exactly half of the articles, i.e. 17500:

assertThat(articlesListOne.size()).isEqualTo(17500);
assertThat(articlesListTwo.size()).isEqualTo(17500);

Finally, we can make an assertion to verify that both lists contain distinct elements:

assertThat(articlesListOne).doesNotContainAnyElementsOf(articlesListTwo);

Notice that this test case will execute successfully, as the articles present in articlesListOne are not present in articlesListTwo. This confirms that we can process the partitions independently.

The splitting process worked as intended and divided the records equally.
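
In the test above, we consumed both halves on the same thread. As a minimal sketch of the parallel case (the test name and thread handling here are our own, not part of the original test class), each half can be handed to its own thread:

@Test
public void givenTwoSplits_whenConsumedOnSeparateThreads_thenAllArticlesAreProcessed() throws InterruptedException {
    List<Article> articles = Stream.generate(() -> new Article("Java"))
      .limit(35000)
      .collect(Collectors.toList());

    Spliterator<Article> split1 = articles.spliterator();
    Spliterator<Article> split2 = split1.trySplit();

    List<Article> articlesListOne = new ArrayList<>();
    List<Article> articlesListTwo = new ArrayList<>();

    // each spliterator and each target list is confined to a single thread
    Thread firstHalf = new Thread(() -> split1.forEachRemaining(articlesListOne::add));
    Thread secondHalf = new Thread(() -> split2.forEachRemaining(articlesListTwo::add));

    firstHalf.start();
    secondHalf.start();
    firstHalf.join();
    secondHalf.join();

    // together, the two halves cover the whole list
    assertThat(articlesListOne.size() + articlesListTwo.size()).isEqualTo(35000);
}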

2.3. estimateSize

The estimateSize method gives us an estimated number of remaining elements:

log.info("Size: " + split1.estimateSize());

This will output:

Size: 17500

2.4. hasCharacteristics

The hasCharacteristics(int) method checks whether the Spliterator’s properties include the given characteristics. If we instead invoke the characteristics() method, we get an int representation of all of them:

log.info("Characteristics: " + split1.characteristics());
Characteristics: 16464
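
To check a single flag, we can pass one of the Spliterator constants to hasCharacteristics(). Here’s a minimal sketch, reusing the split1 spliterator from the earlier example:

// the ArrayList spliterator reports SIZED, so this check passes
assertThat(split1.hasCharacteristics(Spliterator.SIZED)).isTrue();

// it does not report CONCURRENT
assertThat(split1.hasCharacteristics(Spliterator.CONCURRENT)).isFalse();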

3. Spliterator Characteristics

A Spliterator reports eight different characteristics that describe its behaviour. The code consuming the Spliterator, such as the Streams framework, can use them as hints; we’ll decode the value logged above right after this list:

  • SIZED – if it’s capable of returning an exact number of elements with the estimateSize() method
  • SORTED – if it’s iterating through a sorted source
  • SUBSIZED – if we split the instance using a trySplit() method and obtain Spliterators that are SIZED as well
  • CONCURRENT – if the source can be safely modified concurrently
  • DISTINCT – if for each pair of encountered elements x, y, !x.equals(y)
  • IMMUTABLE – if elements held by the source can’t be structurally modified
  • NONNULL – if the source guarantees that the elements encountered won’t be null
  • ORDERED – if iterating over an ordered sequence
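
The 16464 value logged above is simply the bitwise OR of these constants. As a quick worked example for the ArrayList-backed split1 spliterator:

// ORDERED (16) | SIZED (64) | SUBSIZED (16384) = 16464
int expected = Spliterator.ORDERED | Spliterator.SIZED | Spliterator.SUBSIZED;
assertThat(split1.characteristics()).isEqualTo(expected);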

4. A Custom Spliterator

4.1. When to Customize

For the sake of clarity, we’ll use a simple example to show how to write a custom Spliterator. A custom Spliterator lets us traverse all the elements of a source one by one, where the source could be an array of custom model objects, an IO channel, or a generator function.

4.2. How to Customize

Let’s assume that we would like to compute the sum of all elements in a large list of Integers using a custom Spliterator. To solve that, we need to implement a Spliterator that splits the Integer list into sublists. Here’s the implementation of our custom Spliterator:

public class CustomSpliterator implements Spliterator<Integer> {
    private final List<Integer> elements;
    private int currentIndex;

    public CustomSpliterator(List<Integer> elements) {
        this.elements = elements;
        this.currentIndex = 0;
    }

    @Override
    public boolean tryAdvance(Consumer<? super Integer> action) {
        if (currentIndex < elements.size()) {
            action.accept(elements.get(currentIndex));
            currentIndex++;
            return true;
        }
        return false;
    }

    @Override
    public Spliterator<Integer> trySplit() {
        int currentSize = elements.size() - currentIndex;
        if (currentSize < 2) {
            return null;
        }

        int splitIndex = currentIndex + currentSize / 2;
        CustomSpliterator newSpliterator = new CustomSpliterator(elements.subList(currentIndex, splitIndex));
        currentIndex = splitIndex;
        return newSpliterator;
    }

    @Override
    public long estimateSize() {
        return elements.size() - currentIndex;
    }

    @Override
    public int characteristics() {
        return ORDERED | SIZED | SUBSIZED | NONNULL;
    }
}

First, let’s test the CustomSpliterator by processing the collection sequentially:

@Test
public void givenAStreamOfIntegers_whenProcessedSequentialCustomSpliterator_countProducesRightOutput() {
    List<Integer> numbers = new ArrayList<>();
    numbers.add(1);
    numbers.add(2);
    numbers.add(3);
    numbers.add(4);
    numbers.add(5);

    CustomSpliterator customSpliterator = new CustomSpliterator(numbers);
    AtomicInteger sum = new AtomicInteger();

    customSpliterator.forEachRemaining(sum::addAndGet);
    assertThat(sum.get()).isEqualTo(15);
}

Next, let’s test the CustomSpliterator by processing the collection in parallel:

@Test
public void givenAStreamOfIntegers_whenProcessedInParallelWithCustomSpliterator_countProducesRightOutput() {
    List<Integer> numbers = new ArrayList<>();
    numbers.add(1);
    numbers.add(2);
    numbers.add(3);
    numbers.add(4);
    numbers.add(5);

    CustomSpliterator customSpliterator = new CustomSpliterator(numbers);

    // Create a ForkJoinPool for parallel processing
    ForkJoinPool forkJoinPool = ForkJoinPool.commonPool();
    AtomicInteger sum = new AtomicInteger(0);

    // Process elements in parallel using parallelStream
    forkJoinPool.submit(() -> {
        Stream<Integer> parallelStream = StreamSupport.stream(customSpliterator, true); // true for a parallel stream
        parallelStream.forEach(sum::addAndGet); // Process elements in parallel
    }).join();
    assertThat(sum.get()).isEqualTo(15);
}

By utilizing parallel processing, the elements are split into multiple parts and processed concurrently, potentially improving performance for large datasets or computationally intensive tasks.

Also, the custom Spliterator is created from a list of Integers and traverses it while keeping track of the current position.

Let’s discuss in more detail the implementation of each method:

  • The CustomSpliterator takes a list of integers in its constructor and tracks the current index being processed.
  • The tryAdvance() method consumes the next available element: if one exists, it passes it to the Consumer, advances the current index, and returns true. If there are no more elements, it returns false.
  • The trySplit() method splits the remaining elements into two parts. It creates a new CustomSpliterator over the sublist from the current index to the split index; if the remaining size is too small to split, trySplit() returns null.
  • The estimateSize() method returns an estimate of the remaining number of elements to be processed.
  • The characteristics() method specifies the characteristics of the Spliterator. In this case, the ORDERED, SIZED, SUBSIZED, and NONNULL characteristics are set.

5. Support for Primitive Values

The Spliterator API supports primitive values including double, int and long.

The only difference between using a generic Spliterator and a primitive-specialized one is the type of Consumer we pass and the type of the Spliterator itself.

For example, when we need it for int values, we pass an IntConsumer. Furthermore, here’s a list of the primitive-specialized Spliterators (see the short sketch after this list):

  • OfPrimitive<T, T_CONS, T_SPLITR extends Spliterator.OfPrimitive<T, T_CONS, T_SPLITR>>: parent interface for other primitives
  • OfInt: A Spliterator specialized for int
  • OfDouble: A Spliterator specialized for double
  • OfLong: A Spliterator specialized for long
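
As a short sketch of that difference (the variable names here are our own), Spliterator.OfInt pairs with an IntConsumer, which lets it hand over primitive ints without boxing:

int[] values = { 1, 2, 3, 4, 5 };
Spliterator.OfInt intSpliterator = Arrays.spliterator(values);

AtomicInteger sum = new AtomicInteger();
IntConsumer addToSum = sum::addAndGet;

// the IntConsumer overload of tryAdvance receives primitive ints directly
while (intSpliterator.tryAdvance(addToSum));

assertThat(sum.get()).isEqualTo(15);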

6. Conclusion

In this article, we covered Java 8 Spliterator usage, methods, characteristics, splitting process, primitive support and how to customize it.

As always, the code backing this article is available over on GitHub.