1. Overview

In this tutorial, we’ll explore HashSet, one of the most popular Set implementations and an integral part of the Java Collections Framework.

2. Intro to HashSet

HashSet is one of the fundamental data structures in the Java Collections API.

Let’s recall the most important aspects of this implementation:

  • It stores unique elements and permits nulls
  • It’s backed by a HashMap
  • It doesn’t maintain insertion order
  • It’s not thread-safe

Note that this internal HashMap gets initialized when an instance of the HashSet is created:

public HashSet() {
    map = new HashMap<>();
}

If we want to go deeper into how the HashMap works, we can read the article focused on it here.
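Before moving on to the API, let’s make these characteristics concrete with a minimal sketch (the class name HashSetCharacteristicsDemo is ours, purely for illustration):

import java.util.HashSet;
import java.util.Set;

public class HashSetCharacteristicsDemo {
    public static void main(String[] args) {
        Set<String> languages = new HashSet<>();

        // duplicates are ignored: the second add() returns false
        System.out.println(languages.add("Java"));  // true
        System.out.println(languages.add("Java"));  // false

        // a single null element is permitted
        System.out.println(languages.add(null));    // true

        // the iteration order is not the insertion order and may vary
        languages.add("Kotlin");
        languages.add("Scala");
        System.out.println(languages);
    }
}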

3. The API

In this section, we’re going to review the most commonly used methods and have a look at some simple examples.

3.1. add()

The add() method can be used for adding elements to a set. The method contract states that an element will be added only when it isn’t already present in the set. If the element was added, the method returns true; otherwise, it returns false.

We can add an element to a HashSet like this:

@Test
public void whenAddingElement_shouldAddElement() {
    Set<String> hashset = new HashSet<>();
 
    assertTrue(hashset.add("String Added"));
}

From an implementation perspective, the add() method is especially important: its details illustrate how the HashSet works internally, leveraging the HashMap’s put() method:

public boolean add(E e) {
    return map.put(e, PRESENT) == null;
}

The map variable is a reference to the internal, backing HashMap:

private transient HashMap<E, Object> map;

It’d be a good idea to get familiar with hashCode() first in order to understand in detail how elements are organized in hash-based data structures.

Summarizing:

  • A HashMap is an array of buckets with a default capacity of 16 elements – each bucket corresponds to a different hashcode value
  • If various objects have the same hashcode value, they get stored in the same bucket
  • If the load factor is reached, a new array twice the size of the previous one gets created, and all elements get rehashed and redistributed among the new buckets
  • To retrieve a value, we hash the key, mod it, and then go to the corresponding bucket and search through the potential linked list in case there’s more than one object there (the sketch after this list illustrates how a hashcode maps to a bucket index)
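As a rough, hedged illustration of that last point, here’s how a hashcode can be mapped to a bucket index when the bucket array size is a power of two; the bucketIndex() helper below is purely illustrative and not part of the JDK:

// Simplified sketch of mapping a hashcode to a bucket index, in the spirit of
// what HashMap does internally for a power-of-two-sized table.
static int bucketIndex(Object key, int capacity) {
    if (key == null) {
        return 0;                  // null keys conventionally land in bucket 0
    }
    int h = key.hashCode();
    h = h ^ (h >>> 16);            // spread the high bits into the low bits
    return (capacity - 1) & h;     // keep only the low-order bits: the index within the table
}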

3.2. contains()

The purpose of the contains method is to check if an element is present in a given HashSet. It returns true if the element is found, otherwise false.

We can check for an element in the HashSet:

@Test
public void whenCheckingForElement_shouldSearchForElement() {
    Set<String> hashsetContains = new HashSet<>();
    hashsetContains.add("String Added");
 
    assertTrue(hashsetContains.contains("String Added"));
}

Whenever an object is passed to this method, the hash value gets calculated. Then, the corresponding bucket location gets resolved and traversed.
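Under the hood, contains() simply delegates to the backing map, which performs the bucket lookup described above; in the OpenJDK sources the method essentially boils down to:

public boolean contains(Object o) {
    return map.containsKey(o);
}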

3.3. remove()

The remove() method removes the specified element from the set if it’s present, and returns true if the set contained the element.

Let’s see a working example:

@Test
public void whenRemovingElement_shouldRemoveElement() {
    Set<String> removeFromHashSet = new HashSet<>();
    removeFromHashSet.add("String Added");
 
    assertTrue(removeFromHashSet.remove("String Added"));
}
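As with add(), the actual work is delegated to the backing HashMap; the JDK implementation essentially compares the removed mapping against the internal PRESENT dummy value used in add():

public boolean remove(Object o) {
    return map.remove(o) == PRESENT;
}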

3.4. clear()

We use this method when we intend to remove all the items from a set. The implementation simply clears all elements from the underlying HashMap.

Let’s see that in action:

@Test
public void whenClearingHashSet_shouldClearHashSet() {
    Set<String> clearHashSet = new HashSet<>();
    clearHashSet.add("String Added");
    clearHashSet.clear();
    
    assertTrue(clearHashSet.isEmpty());
}

3.5. size()

This is one of the fundamental methods in the API. It’s used heavily as it helps in identifying the number of elements present in the HashSet. The underlying implementation simply delegates the calculation to the HashMap’s size() method.

Let’s see that in action:

@Test
public void whenCheckingTheSizeOfHashSet_shouldReturnTheSize() {
    Set<String> hashSetSize = new HashSet<>();
    hashSetSize.add("String Added");
    
    assertEquals(1, hashSetSize.size());
}
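As mentioned above, the implementation is essentially a one-liner delegating to the backing map:

public int size() {
    return map.size();
}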

3.6. isEmpty()

We can use this method to figure out whether a given instance of a HashSet is empty. The method returns true if the set contains no elements:

@Test
public void whenCheckingForEmptyHashSet_shouldCheckForEmpty() {
    Set<String> emptyHashSet = new HashSet<>();
    
    assertTrue(emptyHashSet.isEmpty());
}

3.7. iterator()

The method returns an iterator over the elements in the Set. The elements are visited in no particular order and iterators are fail-fast.

We can observe the random iteration order here:

@Test
public void whenIteratingHashSet_shouldIterateHashSet() {
    Set<String> hashset = new HashSet<>();
    hashset.add("First");
    hashset.add("Second");
    hashset.add("Third");
    Iterator<String> itr = hashset.iterator();
    while (itr.hasNext()) {
        System.out.println(itr.next());
    }
}

If the set is modified at any time after the iterator is created, in any way except through the iterator’s own remove() method, the iterator throws a ConcurrentModificationException.

Let’s see that in action:

@Test(expected = ConcurrentModificationException.class)
public void whenModifyingHashSetWhileIterating_shouldThrowException() {
 
    Set<String> hashset = new HashSet<>();
    hashset.add("First");
    hashset.add("Second");
    hashset.add("Third");
    Iterator<String> itr = hashset.iterator();
    while (itr.hasNext()) {
        itr.next();
        hashset.remove("Second");
    }
}

Alternatively, had we used the iterator’s remove method, then we wouldn’t have encountered the exception:

@Test
public void whenRemovingElementUsingIterator_shouldRemoveElement() {
 
    Set<String> hashset = new HashSet<>();
    hashset.add("First");
    hashset.add("Second");
    hashset.add("Third");
    Iterator<String> itr = hashset.iterator();
    while (itr.hasNext()) {
        String element = itr.next();
        if (element.equals("Second"))
            itr.remove();
    }
 
    assertEquals(2, hashset.size());
}

The fail-fast behavior of an iterator can’t be guaranteed as it’s impossible to make any hard guarantees in the presence of unsynchronized concurrent modification.

Fail-fast iterators throw ConcurrentModificationException on a best-effort basis. Therefore, it’d be wrong to write a program that depended on this exception for its correctness.

4. How to Convert HashSet to TreeSet With an Object Collection

We can convert a HashSet to a TreeSet using the class constructor TreeSet(Collection<? extends E> c). This creates a new TreeSet object containing the elements in the collection passed to it, sorted according to the elements’ natural ordering.
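As a quick illustration with Strings, whose natural ordering is alphabetical, the conversion is a one-liner:

Set<String> hashSet = new HashSet<>(Arrays.asList("banana", "apple", "cherry"));
Set<String> treeSet = new TreeSet<>(hashSet);

System.out.println(treeSet); // [apple, banana, cherry]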

Let’s now demonstrate how to convert a HashSet whose elements are objects (instances of a class) to a TreeSet. We’ll use an example class, Employee. For the conversion to work, this class has to implement the Comparable interface. Accordingly, we override the compareTo() method to compare two Employee objects based on their IDs:

public class Employee implements Comparable<Employee> {
    private int employeeId;
    private String employeeName;

    Employee(int employeeId, String employeeName) {
        this.employeeId = employeeId;
        this.employeeName = employeeName;
    }

    int getEmployeeId() {
        return employeeId;
    }

    public String getEmployeeName() {
        return employeeName;
    }

    @Override
    public String toString() {
        return employeeId + " " + employeeName;
    }

    @Override
    public int compareTo(Employee o) {
        if (this.employeeId == o.employeeId) {
            return 0;
        } else if (this.employeeId < o.employeeId) {
            return 1;
        } else {
            return -1;
        }
    }
}

Having defined the Employee class, let’s create a HashSet from it. We’ll use a JUnit 5 test with an assertDoesNotThrow() assertion to verify that the conversion doesn’t throw an exception:

@Test
public void givenComparableObject_whenConvertingToTreeSet_thenNoExceptionThrown() {

    Set<Employee> hashSet = new HashSet<>();

    hashSet.add(new Employee(3, "John"));
    hashSet.add(new Employee(5, "Mike"));
    hashSet.add(new Employee(2, "Bob"));
    hashSet.add(new Employee(1, "Tom"));
    hashSet.add(new Employee(4, "Johnny"));

    assertDoesNotThrow(() -> {
        Set<Employee> treeSet = new TreeSet<>(hashSet);
    });
}

The JUnit test passes when we use the Employee class that implements the Comparable interface.

However, when we use an Employee class definition that doesn’t implement the Comparable interface to create a HashSet instance, converting it to TreeSet throws an exception. Again, we can use a JUnit 5 test to verify it:

@Test
public void givenNonComparableObject_whenConvertingToTreeSet_thenExceptionThrown() {

    // here, Employee refers to a variant of the class that does NOT implement Comparable
    Set<Employee> hashSet = new HashSet<>();

    hashSet.add(new Employee(3, "John"));
    hashSet.add(new Employee(5, "Mike"));
    hashSet.add(new Employee(2, "Bob"));
    hashSet.add(new Employee(1, "Tom"));
    hashSet.add(new Employee(4, "Johnny"));

    assertThrows(ClassCastException.class, () -> {
        Set<Employee> treeSet = new TreeSet<>(hashSet);
    });
}

This time, the conversion throws a ClassCastException, and the assertThrows() test passes.

5. How Does HashSet Maintain Uniqueness?

When we put an object into a HashSet, it uses the object’s hashcode value to determine whether the element is already in the set.

Each hash code value corresponds to a certain bucket location, which can contain various elements for which the calculated hash value is the same. However, two objects with the same hashCode might not be equal.

So, objects within the same bucket will be compared using the equals() method.
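To make this concrete, here’s a hedged sketch; the Book class is purely illustrative, and its deliberately constant hashCode() forces every instance into the same bucket, so uniqueness relies entirely on equals():

class Book {
    private final String title;

    Book(String title) {
        this.title = title;
    }

    @Override
    public boolean equals(Object o) {
        return o instanceof Book && ((Book) o).title.equals(this.title);
    }

    @Override
    public int hashCode() {
        return 42; // deliberately constant: every Book lands in the same bucket
    }
}

Now, adding two equal Book instances and one unequal instance keeps only two elements:

@Test
public void whenAddingEqualAndUnequalBooks_shouldKeepOnlyUniqueOnes() {
    Set<Book> books = new HashSet<>();

    assertTrue(books.add(new Book("Clean Code")));      // new element
    assertFalse(books.add(new Book("Clean Code")));     // equal to the first one, rejected
    assertTrue(books.add(new Book("Effective Java")));  // same bucket, but not equal

    assertEquals(2, books.size());
}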

6. Performance of HashSet

The performance of a HashSet is affected mainly by two parameters – its Initial Capacity and the Load Factor.

The expected time complexity of adding an element to a set is O(1), which can drop to O(n) in the worst-case scenario (only one bucket present); therefore, it’s essential to maintain the right capacity for the HashSet.

An important note: since JDK 8, the worst-case time complexity is O(log n), because buckets that grow too large are converted into balanced trees.

The load factor describes the maximum fill level above which the set needs to be resized.

We can also create a HashSet with custom values for initial capacity and load factor:

Set<String> hashset = new HashSet<>();
Set<String> hashset = new HashSet<>(20);
Set<String> hashset = new HashSet<>(20, 0.5f);

In the first case, the default values are used – the initial capacity of 16 and the load factor of 0.75. In the second, we override the default capacity and in the third one, we override both.

A low initial capacity reduces space complexity but increases the frequency of rehashing which is an expensive process.

On the other hand, a high initial capacity increases the cost of iteration and the initial memory consumption.

As a rule of thumb:

  • A high initial capacity is good for a large number of entries coupled with little to no iteration
  • A low initial capacity is good for a few entries with a lot of iteration

It’s therefore very important to strike the correct balance between the two. Usually, the default configuration is optimized and works just fine; should we feel the need to tune these parameters to suit our requirements, we need to do so judiciously.
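One common technique, assuming we know the number of entries up front (the expectedSize value below is just a placeholder), is to pre-size the set so that no rehashing happens before that count is reached:

int expectedSize = 1_000;

// capacity chosen so that expectedSize elements stay below the default 0.75 load factor
Set<String> presized = new HashSet<>((int) (expectedSize / 0.75f) + 1);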

7. Conclusion

In this article, we outlined the utility of a HashSet, its purpose, and its underlying workings. We saw how efficient it is, given its constant-time performance and its ability to avoid duplicates.

We also studied some of the important methods from the API and how they can help us, as developers, use a HashSet to its full potential.

The code backing this article is available on GitHub.