1. Introduction

The JVM ships with several garbage collectors to support a variety of deployment scenarios. This gives us the flexibility to choose the collector that best fits our application.

By default, the JVM chooses the most appropriate garbage collector based on the class of the host computer. However, our application sometimes experiences major GC-related bottlenecks that require us to take more control over which algorithm is used. So, how does one settle on a GC algorithm?

In this article, we attempt to answer that question.

2. What Is a GC?

Since Java is a garbage-collected language, we're shielded from the burden of manually allocating and deallocating memory in our applications. The whole chunk of memory the OS allocates to a JVM process is called the heap. The JVM divides this heap into two regions called generations, a breakdown that lets it apply a variety of techniques for efficient memory management.

The young generation is where newly created objects are allocated. It's usually small (100-500 MB) and consists of the Eden space plus two survivor spaces. The old generation is where aged, typically long-lived objects end up, and this space is much larger than the young generation.

The collector continuously tracks how full the young generation is and triggers minor collections, during which live objects are moved to one of the survivor spaces and dead ones are removed. Once an object has survived a certain number of minor GCs, the collector promotes it to the old generation. When the old generation is considered full, a major GC happens and dead objects are removed from it.

During each of these GCs, there are stop-the-world phases during which nothing else happens — the application can’t service any requests. We call this pause time.
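
To see these collections and their stop-the-world pauses in practice, we can turn on GC logging. A minimal sketch, using a placeholder app.jar for the application, on Java 9+ and on Java 8 respectively:

java -Xlog:gc -jar app.jar
java -verbose:gc -XX:+PrintGCDetails -jar app.jar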

3. Variables to Consider

Much as GC shields us from manual memory management, it achieves this at a cost. We should aim to keep the GC runtime overhead as low as possible. There are several variables that can help us decide which collector would best serve our application needs. We’ll go over them in the remainder of this section.

3.1. Heap Size

This is the total amount of working memory the OS allocates to the JVM. Theoretically, the larger the memory, the more objects can be kept before collection, leading to longer GC times. The minimum and maximum heap sizes can be set with the -Xms<size> and -Xmx<size> command-line options.
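
For example, a run with an initial heap of 512 MB and a maximum of 4 GB might look like this (app.jar is again just a placeholder):

java -Xms512m -Xmx4g -jar app.jar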

3.2. Application Data Set Size

This is the total size of the objects an application needs to keep in memory to work effectively. Since new objects are first allocated in the young generation space, this will directly affect the required maximum heap size and, hence, the GC time.

3.3. Number of CPUs

This is the number of cores the machine has available. This variable directly affects which algorithm we choose. Some are only efficient when there are multiple cores available, and the reverse is true for other algorithms.
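
The parallel collectors size their worker thread pools based on the available cores by default; if we need to cap that ourselves, a sketch with an assumed limit of four GC threads would be:

java -XX:ParallelGCThreads=4 -jar app.jar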

3.4. Pause Time

The pause time is the duration during which the garbage collector stops the application to reclaim memory. This variable directly affects latency, so the goal is to limit the longest of these pauses.
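
Collectors such as G1 and Parallel accept a soft pause-time target that they try (but aren't guaranteed) to meet. For instance, asking for pauses of at most roughly 200 ms:

java -XX:MaxGCPauseMillis=200 -jar app.jar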

3.5. Throughput

By this, we mean the proportion of total time the application spends doing useful work rather than GC work. The higher the ratio of application time to GC overhead, the higher the application's throughput.
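
The throughput-oriented collectors also expose this as a goal: -XX:GCTimeRatio=N asks the JVM to keep GC time to roughly 1/(1+N) of total time, so a value of 19 targets about 5% GC overhead:

java -XX:+UseParallelGC -XX:GCTimeRatio=19 -jar app.jar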

3.6. Memory Footprint

This is the working memory used by a GC process. When a setup has limited memory or many processes, this variable may dictate scalability.

3.7. Promptness

This is the time between an object becoming dead and the memory it occupies being reclaimed. It's related to the heap size: in theory, the larger the heap, the lower the promptness, as it takes longer for a collection to be triggered.

3.8. Java Version

As new Java versions emerge, there are usually changes in the supported GC algorithms and also the default collector. We recommend starting off with the default collector as well as its default arguments. Tweaking each argument has varying effects depending on the chosen collector.
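
A quick way to check which collector a particular JVM selects by default is to print the ergonomically chosen flags, no application required:

java -XX:+PrintCommandLineFlags -version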

3.9. Latency

This is the responsiveness of an application. GC pauses affect this variable directly.

4. Garbage Collectors

Besides serial GC, all the other collectors are most effective when there’s more than one core available:

4.1. Serial GC

The serial collector uses a single thread to perform all the garbage collection work. It’s selected by default on certain small hardware and operating system configurations, or it can be explicitly enabled with the option -XX:+UseSerialGC.
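
For example, running an application (placeholder jar name) with the serial collector explicitly selected:

java -XX:+UseSerialGC -jar app.jar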

Pros:

  • Without inter-thread communication overhead, it’s relatively efficient.
  • It’s suitable for client-class machines and embedded systems.
  • It’s suitable for applications with small datasets.
  • Even on multiprocessor hardware, if data sets are small (up to 100 MB), it can still be the most efficient.

Cons:

  • It’s not efficient for applications with large datasets.
  • It can’t take advantage of multiprocessor hardware.

4.2. Parallel/Throughput GC

This collector uses multiple threads to speed up garbage collection. In Java 8 and earlier, it's the default for server-class machines. On later versions, we can select it explicitly with the -XX:+UseParallelGC option.
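
For example, again with a placeholder jar:

java -XX:+UseParallelGC -jar app.jar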

Pros:

  • It can take advantage of multiprocessor hardware.
  • It’s more efficient for larger data sets than serial GC.
  • It provides high overall throughput.
  • It attempts to minimize the memory footprint.

Cons:

  • Applications incur long pause times during stop-the-world operations.
  • It doesn’t scale well with heap size.

It’s best if we want more throughput and don’t care about pause time, as is the case with non-interactive apps like batch tasks, offline jobs, and web servers.

4.3. Concurrent Mark Sweep (CMS) GC

We consider CMS a mostly concurrent collector. This means it performs some expensive work concurrently with the application. It’s designed for low latency by eliminating the long pause associated with the full GC of parallel and serial collectors.

We can use the option -XX:+UseConcMarkSweepGC to enable the CMS collector. The core Java team deprecated it as of Java 9 and completely removed it in Java 14.
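
On a JVM where it's still available (Java 8, or Java 9 through 13 with a deprecation warning), it can be selected like this:

java -XX:+UseConcMarkSweepGC -jar app.jar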

Pros:

  • It’s great for low latency applications as it minimizes pause time.
  • It scales relatively well with heap size.
  • It can take advantage of multiprocessor machines.

Cons:

  • It’s deprecated as of Java 9 and removed in Java 14.
  • It becomes relatively inefficient when data sets reach gigantic sizes or when collecting humongous heaps.
  • It requires the application to share resources with GC during concurrent phases.
  • There may be throughput issues as there’s more time spent overall in GC operations.
  • Overall, it uses more CPU time due to its mostly concurrent nature.

4.4. G1 (Garbage-First) GC

G1 uses multiple background GC threads to scan and clear the heap just like CMS. Actually, the core Java team designed G1 as an improvement over CMS, patching some of its weaknesses with additional strategies.

In addition to the incremental and concurrent collection, it tracks previous application behavior and GC pauses to achieve predictability. It then focuses on reclaiming space in the most efficient areas first — those mostly filled with garbage. We call it Garbage-First for this reason.

Since Java 9, G1 is the default collector for server-class machines. We can explicitly enable it by providing -XX:+UseG1GC on the command line.
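
For example, selecting G1 together with a pause-time goal (both values here are purely illustrative):

java -XX:+UseG1GC -XX:MaxGCPauseMillis=100 -jar app.jar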

Pros:

  • It’s very efficient with gigantic datasets.
  • It takes full advantage of multiprocessor machines.
  • It’s the most efficient in achieving pause time goals.

Cons:

  • It’s not the best when there are strict throughput goals.
  • It requires the application to share resources with GC during concurrent collections.

G1 works best for applications with very strict pause-time goals and a modest overall throughput, such as real-time applications like trading platforms or interactive graphics programs.

4.5. Z Garbage Collector (ZGC)

ZGC is a scalable low latency garbage collector. It manages to keep low pause times on even multi-terabyte heaps. It uses techniques including reference coloring, relocation, load barriers and remapping. It is a good fit for server applications, where large heaps are common and fast application response times are required.

It was introduced in Java 11 as an experimental GC implementation. We can explicitly enable it by providing -XX:+UnlockExperimentalVMOptions -XX:+UseZGC on the command line. For more detailed descriptions, please visit our article on Z Garbage Collector.
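
For example, on Java 11 through 14, where it's still experimental, and on Java 15+, where it's production-ready and the unlock flag is no longer needed:

java -XX:+UnlockExperimentalVMOptions -XX:+UseZGC -jar app.jar
java -XX:+UseZGC -jar app.jar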

5. Conclusion

For many applications, the choice of the collector is never an issue, as the JVM default usually suffices. That means the application can perform well in the presence of garbage collection with pauses of acceptable frequency and duration. However, this isn’t the case for a large class of applications, especially those with humongous datasets, many threads, and high transaction rates.

In this article, we’ve explored the garbage collectors supported by the JVM. We’ve also looked at key variables that can help us choose the right collector for the needs of our application.
