1. Overview

In this tutorial, we’ll learn how to address a fairly rare requirement: ensuring element uniqueness while still being able to query an element’s index. The most obvious idea is to rely on what the standard collection classes already provide, but that might not be the best option, and we’ll learn why.

Overall, the approaches presented in this article should be tailored to the specific task, taking into account their pros and cons.

2. LinkedHashSet

When we’re looking for a data structure that ensures uniqueness and maintains element order, LinkedHashSet is often the first choice. Also, we can iterate over it to find the position of a given element:

public static <E> int getIndex(LinkedHashSet<E> set, E element) {
    int index = 0;
    for (E current : set) {
        if (current.equals(element)) {
            return index;
        }
        index++;
    }
    return -1;
}

Another option is to get an Iterator from the LinkedHashSet. The approach doesn’t fundamentally change; it just makes the iteration process more explicit:

public static <E> int getIndexUsingIterator(LinkedHashSet<E> set, E element) {
    Iterator<E> iterator = set.iterator();
    int index = 0;
    while (iterator.hasNext()) {
        if (iterator.next().equals(element)) {
            return index;
        }
        index++;
    }
    return -1;
}

The main benefit is that LinkedHashSet does almost everything out of the box. However, there’s a drawback: we don’t have a way to get an element by its index in constant time. That operation is also linear, so if we perform many lookups, it can dramatically affect our code’s performance. At the same time, if the LinkedHashSet stays small and lookups are rare compared to additions, this approach might be all we need.
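
To make the cost of that missing operation concrete, here’s an illustrative helper (not part of the LinkedHashSet API; the getByIterating() name is ours) that retrieves an element by its position simply by walking the set:

public static <E> E getByIterating(LinkedHashSet<E> set, int index) {
    if (index < 0 || index >= set.size()) {
        throw new IndexOutOfBoundsException("Index: " + index + ", Size: " + set.size());
    }
    int current = 0;
    for (E element : set) {
        if (current == index) {
            // Reached the requested position after a linear walk
            return element;
        }
        current++;
    }
    throw new IllegalStateException("Unreachable: bounds were checked above");
}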

3. Conversion

As an option, we can create a special data structure for lookups, for example, by converting the LinkedHashSet to a List, or an array:

public static <E> int getIndexByConversion(Set<E> set, E element) {
    List<E> list = new ArrayList<>(set);
    return list.indexOf(element);
}

In some cases, this approach is assumed to be more performant. The confusion stems from the fact that we’re calling a simple indexOf() method, which makes it look as if the lookup is somehow optimized. In reality, indexOf() performs the same linear iteration we wrote earlier. What the conversion does add is the ability to retrieve an element by its index, something a plain LinkedHashSet can’t do directly:

public static <E> E getElementByIndex(Set<E> set, int index) {
    List<E> list = new ArrayList<>(set);
    if (index >= 0 && index < list.size()) {
        return list.get(index);
    }
    throw new IndexOutOfBoundsException("Index: " + index + ", Size: " + list.size());
}

Converting a LinkedHashSet to an array is possible, but requires a few additional type-casting steps:

@SuppressWarnings("unchecked")
public static <E> E[] convertToArray(Set<E> set, Class<E> clazz) {
    return set.toArray((E[]) java.lang.reflect.Array.newInstance(clazz, set.size()));
}

public static <E> int getIndexByArray(Set<E> set, E element, Class<E> clazz) {
    E[] array = convertToArray(set, clazz);
    for (int i = 0; i < array.length; i++) {
        if (array[i].equals(element)) {
            return i;
        }
    }
    return -1;
}

In general, this approach doesn’t buy us much for a simple reason: the indexOf() method performs the same linear search we wrote explicitly in the first example. Additionally, we create a new data structure, doubling the space this solution requires. However, once the List or array is built, finding an element by its index takes constant time. Thus, we need to know which operations we’d like to prioritize before picking an approach.

4. List and Set

We can change our perspective a bit and separate the two requirements, uniqueness and order maintenance. Thus, we can use two separate structures: a List and a Set. The List maintains the order and the Set the uniqueness, so every time we want to add an element, we check it against the Set first and only then insert it into the List:

public class ListAndSetApproach<E> {
    private final List<E> list;
    private final Set<E> set;

    public ListAndSetApproach() {
        this.list = new ArrayList<>();
        this.set = new HashSet<>();
    }

    public boolean add(E element) {
        if (set.add(element)) {
            list.add(element);
            return true;
        }
        return false;
    }

    public int indexOf(E element) {
        return list.indexOf(element);
    }

    // Other methods
}

However, we should also make sure we remove elements correctly. With a single LinkedHashSet, this was handled transparently. Here, we have to remove the element from both structures ourselves to keep them consistent:

public class ListAndSetApproach<E> {

    // Initialization, fields, and a constructor

    public boolean remove(E element) {
        if (set.remove(element)) {
            list.remove(element);
            return true;
        }
        return false;
    }

    // Other methods
}

Even with all these modifications and two data structures, the indexOf() lookup remains linear, so we have the same problem as before. While we can add caching, it won’t change much, since building the cache has the same time complexity as the lookup itself. The only way to optimize the solution further is to create a custom implementation that addresses all our requirements.

However, this approach shares the same upside as the previous one: lookups by index run in constant time, since the List backs them directly. If those are the operations we want to optimize, it might be a good solution:

public class ListAndSetApproach<E> {

    // Initialization, fields, and a constructor

    public E get(int index) {
        return list.get(index);
    }

    // Other methods
}

By combining two data structures, we get the benefits of both: the Set maintains uniqueness, and the List maintains insertion order.
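
As a quick, illustrative sketch (assuming the partial listings above are merged into a single ListAndSetApproach class), the combined structure behaves like this:

ListAndSetApproach<String> elements = new ListAndSetApproach<>();
elements.add("apple");   // true, new element
elements.add("banana");  // true, new element
elements.add("apple");   // false, duplicate rejected by the Set

int index = elements.indexOf("banana"); // 1, linear search in the List
String value = elements.get(1);         // "banana", constant-time List access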

5. Custom Implementation

To make the lookups constant-time, we can use Maps, but this requires more complex bookkeeping. Thus, the best approach is to encapsulate the logic in a separate class. When we keep multiple data structures consistent only by convention, for example removing an element from one but forgetting to do the same in the other, we expose ourselves to subtle bugs. Encapsulation also reveals only the functionality we care about, limiting the ways the structures can be misused:

public class IndexAwareSetWithTwoMaps<E> {
    private final Map<E, Integer> elementToIndex;
    private final Map<Integer, E> indexToElement;
    private int nextIndex;

    public IndexAwareSetWithTwoMaps() {
        this.elementToIndex = new HashMap<>();
        this.indexToElement = new HashMap<>();
        this.nextIndex = 0;
    }

    public boolean add(E element) {
        if (elementToIndex.containsKey(element)) {
            return false;
        }
        elementToIndex.put(element, nextIndex);
        indexToElement.put(nextIndex, element);
        nextIndex++;
        return true;
    }

    public boolean remove(E element) {
        Integer index = elementToIndex.get(element);
        if (index == null) {
            return false;
        }

        elementToIndex.remove(element);
        indexToElement.remove(index);

        // Shift every element that follows the removed one down by one position,
        // so the indexes stay contiguous
        for (int i = index + 1; i < nextIndex; i++) {
            E elementAtI = indexToElement.get(i);
            if (elementAtI != null) {
                indexToElement.remove(i);
                elementToIndex.put(elementAtI, i - 1);
                indexToElement.put(i - 1, elementAtI);
            }
        }

        nextIndex--;
        return true;
    }

    public int indexOf(E element) {
        return elementToIndex.getOrDefault(element, -1);
    }

    public E get(int index) {
        if (index < 0 || index >= nextIndex) {
            throw new IndexOutOfBoundsException("Index: " + index + ", Size: " + nextIndex);
        }
        return indexToElement.get(index);
    }
}
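
As an illustrative check of how the indexes behave (the element values are arbitrary), note how removing an element shifts the indexes of everything that follows it:

IndexAwareSetWithTwoMaps<String> set = new IndexAwareSetWithTwoMaps<>();
set.add("apple");   // index 0
set.add("banana");  // index 1
set.add("cherry");  // index 2

set.indexOf("cherry"); // 2, constant-time map lookup

set.remove("banana");  // the elements after "banana" are reindexed
set.indexOf("cherry"); // 1
set.get(1);            // "cherry"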

Alternatively, we can simplify the solution by wrapping a LinkedHashSet in a custom class. Even when we use a single data structure, the wrapper lets us hide caching or change the implementation later. For example, we can start with a simple conversion and then, as performance becomes a concern, make the necessary improvements without touching the rest of the application.
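
As a rough sketch of that idea (the class name and method set here are just illustrative), such a wrapper could delegate to a LinkedHashSet and keep the linear indexOf() from earlier behind a small, focused API:

public class IndexAwareLinkedHashSet<E> {
    private final LinkedHashSet<E> delegate = new LinkedHashSet<>();

    public boolean add(E element) {
        return delegate.add(element);
    }

    public boolean remove(E element) {
        return delegate.remove(element);
    }

    public int indexOf(E element) {
        // Same linear iteration as before, now hidden behind the wrapper
        int index = 0;
        for (E current : delegate) {
            if (current.equals(element)) {
                return index;
            }
            index++;
        }
        return -1;
    }
}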

6. Complexity Overview

All the listed approaches have pros and cons. To pick the best-suited one, we should know which operations would dominate, the size of the data set we expect to work with, and the rate of change in the data. We can use the following table to decide on the approach:

Approach                             | add            | remove               | indexOf              | get (by index)
LinkedHashSet (iterate for index)    | O(1)           | O(1)                 | O(n) (iteration)     | N/A
Convert to List/array on demand      | O(n) (rebuild) | O(n) (rebuild)       | O(n) (linear search) | O(n) (rebuild dominates)
List + Set                           | O(1)           | O(n) (list removal)  | O(n) (list search)   | O(1)
Custom (two maps, reindex on remove) | O(1)           | O(n) (shift indices) | O(1)                 | O(1)

In most cases, the custom approach would perform best, since most operations would take constant time, except for removal.

7. Multithreading

When these data structures are accessed from multiple threads, correctness becomes just as important as time complexity. A solution that works perfectly in a single-threaded context may easily break under concurrent access if updates aren’t coordinated properly. If we rely on a single data structure, concurrent collections can help: Java provides thread-safe implementations that protect individual operations and reduce the risk of race conditions.

Suppose we have an approach that relies on two collections working together. Even if both collections are concurrent, their operations are not synchronized. Compound actions such as checking uniqueness in one structure and then updating another must still be executed atomically to keep the data consistent.

The simplest and most reliable way to ensure this consistency is to use the synchronized keyword. By synchronizing the relevant methods or code blocks, we can guarantee that multi-step operations are performed as a single unit, preserving correctness with minimal complexity. In high-contention applications, this straightforward approach may not scale well. In such scenarios, it is worth exploring more advanced concurrency techniques and possibly rethinking the overall design to reduce shared mutable state.
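
As a minimal sketch of that idea, assuming the two-collection approach from section 4 (the class and method names below are illustrative), we can guard each compound operation with the same intrinsic lock:

public class SynchronizedListAndSet<E> {
    private final List<E> list = new ArrayList<>();
    private final Set<E> set = new HashSet<>();

    // Every check-then-act sequence holds the same lock, so the two
    // collections cannot drift out of sync between the steps
    public synchronized boolean add(E element) {
        if (set.add(element)) {
            list.add(element);
            return true;
        }
        return false;
    }

    public synchronized boolean remove(E element) {
        if (set.remove(element)) {
            list.remove(element);
            return true;
        }
        return false;
    }

    public synchronized int indexOf(E element) {
        return list.indexOf(element);
    }
}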

8. Conclusion

In this article, we’ve discussed different ways to optimize the indexOf() operation while keeping elements unique. In some cases, the functionality of a single data structure isn’t enough for our goals, so we may want to combine several of them. All the approaches have pros and cons and are highly dependent on the context in which the code will run. Therefore, there’s no single “best” solution. However, the most flexible approach is to wrap the code in a dedicated class, which makes it easier to change or improve the implementation when necessary.

As usual, all the code from this tutorial is available over on GitHub.
