Course – LS – All

Get started with Spring and Spring Boot, through the Learn Spring course:

>> CHECK OUT THE COURSE

1. Introduction

In this article, we’ll learn different approaches to finding duplicates in a List in Java.

Given a list of integers that contains duplicate elements, we'll find those duplicates. For example, given the input list [1, 2, 3, 3, 4, 4, 5], the output list will be [3, 4].

2. Finding Duplicates Using Collections

In this section, we’ll discuss two ways of using Collections to extract duplicate elements present in a list.

2.1. Using the contains() Method of Set

A Set in Java doesn't contain duplicates, and its contains() method returns true only if the element is already present in the set.

We’ll add elements to the Set if contains() returns false. Otherwise, we’ll add the element to the output list. The output list thus contains the duplicate elements:

List<Integer> listDuplicateUsingSet(List<Integer> list) {
    List<Integer> duplicates = new ArrayList<>();
    Set<Integer> set = new HashSet<>();
    for (Integer i : list) {
        if (set.contains(i)) {
            duplicates.add(i);
        } else {
            set.add(i);
        }
    }
    return duplicates;
}

Let's write a test to check that the duplicates list contains only the duplicate elements:

@Test
void givenList_whenUsingSet_thenReturnDuplicateElements() {
    List<Integer> list = Arrays.asList(1, 2, 3, 3, 4, 4, 5);
    List<Integer> duplicates = listDuplicate.listDuplicateUsingSet(list);
    Assert.assertEquals(2, duplicates.size());
    Assert.assertTrue(duplicates.contains(3));
    Assert.assertTrue(duplicates.contains(4));
    Assert.assertFalse(duplicates.contains(1));
}

Here we see that the output list contains only two elements, 3 and 4.

This approach takes O(n) time for n elements in a list and extra space of size n for the set.
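One caveat worth noting: if an element occurs more than twice, this approach adds it to the output once per extra occurrence. If we want each duplicate reported only once, a small variant (a sketch, using a Set for the output as well; the method name here is hypothetical) keeps the result distinct:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class DistinctDuplicates {

    // Variant that reports each duplicate only once, by collecting
    // duplicates into a Set instead of a List
    static List<Integer> listDistinctDuplicatesUsingSet(List<Integer> list) {
        Set<Integer> seen = new HashSet<>();
        Set<Integer> duplicates = new HashSet<>();
        for (Integer i : list) {
            // add() returns false when the element was already seen
            if (!seen.add(i)) {
                duplicates.add(i);
            }
        }
        return new ArrayList<>(duplicates);
    }

    public static void main(String[] args) {
        // 3 occurs three times, but is reported only once
        System.out.println(listDistinctDuplicatesUsingSet(Arrays.asList(1, 3, 3, 3, 4, 4)));
    }
}
```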

2.2. Using a Map and Storing the Frequency of Elements

We can use a Map to store the frequency of each element and then add them to the output list only when the frequency of the element isn’t 1:

List<Integer> listDuplicateUsingMap(List<Integer> list) {
    List<Integer> duplicates = new ArrayList<>();
    Map<Integer, Integer> frequencyMap = new HashMap<>();
    for (Integer number : list) {
        frequencyMap.put(number, frequencyMap.getOrDefault(number, 0) + 1);
    }
    for (Map.Entry<Integer, Integer> entry : frequencyMap.entrySet()) {
        if (entry.getValue() != 1) {
            duplicates.add(entry.getKey());
        }
    }
    return duplicates;
}

Let's write a test to check that the duplicates list contains only duplicate elements:

@Test
void givenList_whenUsingFrequencyMap_thenReturnDuplicateElements() {
    List<Integer> list = Arrays.asList(1, 2, 3, 3, 4, 4, 5);
    List<Integer> duplicates = listDuplicate.listDuplicateUsingMap(list);
    Assert.assertEquals(2, duplicates.size());
    Assert.assertTrue(duplicates.contains(3));
    Assert.assertTrue(duplicates.contains(4));
    Assert.assertFalse(duplicates.contains(1));
}

Here we see that the output list contains only two elements, 3 and 4.

This approach takes O(n) time for n elements in a list and extra space of size n for the map.

3. Using Streams in Java 8

In this section, we’ll discuss three ways of using Streams to extract duplicate elements present in a list.

3.1. Using filter() and Set.add() Method

Set.add() adds the specified element to this set if it’s not already present. If this set already contains the element, the call leaves the set unchanged and returns false.
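We can see this return value in action with a quick check:

```java
import java.util.HashSet;
import java.util.Set;

public class SetAddDemo {
    public static void main(String[] args) {
        Set<Integer> set = new HashSet<>();
        System.out.println(set.add(1)); // true: 1 wasn't present, so it's added
        System.out.println(set.add(1)); // false: 1 is already present, set unchanged
    }
}
```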

Here, we'll convert the list to a stream and try to add each element to a Set. Elements for which add() returns false are the duplicates, and we filter them out and collect them into a List:

List<Integer> listDuplicateUsingFilterAndSetAdd(List<Integer> list) {
    Set<Integer> elements = new HashSet<>();
    return list.stream()
      .filter(n -> !elements.add(n))
      .collect(Collectors.toList());
}

Let's write a test to check that the duplicates list contains only duplicate elements:

@Test
void givenList_whenUsingFilterAndSetAdd_thenReturnDuplicateElements() {
    List<Integer> list = Arrays.asList(1, 2, 3, 3, 4, 4, 5);
    List<Integer> duplicates = listDuplicate.listDuplicateUsingFilterAndSetAdd(list);
    Assert.assertEquals(2, duplicates.size());
    Assert.assertTrue(duplicates.contains(3));
    Assert.assertTrue(duplicates.contains(4));
    Assert.assertFalse(duplicates.contains(1));
}

Here we see that the output list contains only two elements, 3 and 4, as expected.

This approach using filter() with Set.add() is among the fastest ways to find duplicate elements, with O(n) time complexity and extra space of size n for the set.

3.2. Using Collections.frequency()

Collections.frequency() returns the number of elements in the specified collection that are equal to the specified value. Here, we'll convert the List to a Stream and keep only the elements for which Collections.frequency() returns a value greater than one.
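For our sample input, for example, Collections.frequency() counts two occurrences of 3 and one of 1:

```java
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

public class FrequencyDemo {
    public static void main(String[] args) {
        List<Integer> list = Arrays.asList(1, 2, 3, 3, 4, 4, 5);
        System.out.println(Collections.frequency(list, 3)); // prints 2
        System.out.println(Collections.frequency(list, 1)); // prints 1
    }
}
```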

We’ll collect these elements into Set to avoid repetitions and finally convert Set to List:

List<Integer> listDuplicateUsingCollectionsFrequency(List<Integer> list) {
    List<Integer> duplicates = new ArrayList<>();
    Set<Integer> set = list.stream()
      .filter(i -> Collections.frequency(list, i) > 1)
      .collect(Collectors.toSet());
    duplicates.addAll(set);
    return duplicates;
}

Let's write a test to check that the duplicates list contains only duplicate elements:

@Test
void givenList_whenUsingCollectionsFrequency_thenReturnDuplicateElements() {
    List<Integer> list = Arrays.asList(1, 2, 3, 3, 4, 4, 5);
    List<Integer> duplicates = listDuplicate.listDuplicateUsingCollectionsFrequency(list);
    Assert.assertEquals(2, duplicates.size());
    Assert.assertTrue(duplicates.contains(3));
    Assert.assertTrue(duplicates.contains(4));
    Assert.assertFalse(duplicates.contains(1));
}

As expected, the output list contains only two elements, 3 and 4.

This approach using Collections.frequency() is the slowest, because it compares each element against the whole list: each call to Collections.frequency(list, i) is itself O(n), so the overall complexity is O(n^2). It also requires extra space of size n for the set.

3.3. Using Map and Collectors.groupingBy()

Collectors.groupingBy() returns a collector implementing a cascaded “group by” operation on input elements.

It groups elements according to a classification function and then performs a reduction operation on the values associated with each key, using the specified downstream collector. The classification function maps elements to some key type K. The downstream collector operates on the input elements and produces a result of type D. The resulting collector produces a Map<K, D>.

Here we’ll use Function.identity() as the classification function and Collectors.counting() as the downstream collector.

Function.identity() returns a function that always returns its input argument. Collectors.counting() returns a collector that counts the number of input elements; if no elements are present, the result is zero. Using Collectors.groupingBy() with these two, we thus get a map of elements to their frequency.
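To see the intermediate result, here's what that frequency map looks like for our sample input:

```java
import java.util.Arrays;
import java.util.List;
import java.util.Map;
import java.util.function.Function;
import java.util.stream.Collectors;

public class GroupingByCountingDemo {
    public static void main(String[] args) {
        List<Integer> list = Arrays.asList(1, 2, 3, 3, 4, 4, 5);
        // Group identical elements together and count each group
        Map<Integer, Long> frequencyMap = list.stream()
          .collect(Collectors.groupingBy(Function.identity(), Collectors.counting()));
        System.out.println(frequencyMap); // e.g. {1=1, 2=1, 3=2, 4=2, 5=1}
    }
}
```

Note that Collectors.counting() counts into Long values, so the map is a Map<Integer, Long>.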

Then we convert the EntrySet of this Map into a Stream, filter out only the entries with a value greater than 1, and collect their keys into a Set to avoid repetitions. Finally, the Set is converted into a List:

List<Integer> listDuplicateUsingMapAndCollectorsGroupingBy(List<Integer> list) {
    List<Integer> duplicates = new ArrayList<>();
    Set<Integer> set = list.stream()
      .collect(Collectors.groupingBy(Function.identity(), Collectors.counting()))
      .entrySet()
      .stream()
      .filter(m -> m.getValue() > 1)
      .map(Map.Entry::getKey)
      .collect(Collectors.toSet());
    duplicates.addAll(set);
    return duplicates;
}

Let's write a test to check that the duplicates list contains only duplicate elements:

@Test
void givenList_whenUsingMapAndCollectorsGroupingBy_thenReturnDuplicateElements() {
    List<Integer> list = Arrays.asList(1, 2, 3, 3, 4, 4, 5);
    List<Integer> duplicates = listDuplicate.listDuplicateUsingMapAndCollectorsGroupingBy(list);
    Assert.assertEquals(2, duplicates.size());
    Assert.assertTrue(duplicates.contains(3));
    Assert.assertTrue(duplicates.contains(4));
    Assert.assertFalse(duplicates.contains(1));
}

Here we see that the output list contains only two elements, 3 and 4.

Collectors.groupingBy() takes O(n) time. A filter() operation is performed on the resulting EntrySet, but the overall complexity remains O(n), as map lookup takes O(1) time. It also requires extra space of size n for the map.

4. Conclusion

In this article, we learned about different ways of extracting duplicate elements from a List in Java.

We discussed approaches using Set and Map and their corresponding approaches using Stream. The code using Stream is far more declarative and conveys the intent clearly, without the need for external iterators.

As always, the complete code samples for this article can be found over on GitHub.
