
1. Overview

In this tutorial, we’ll tackle the issue of high CPU usage in Java programs. We’ll look at potential root causes and how to troubleshoot such scenarios.

2. What Is Considered High CPU Usage

Before we proceed further, we have to define what we consider high CPU usage. After all, this metric depends on what the program is doing and can fluctuate a lot, even up to 100%.

For this article, we’ll consider the cases where something like the Windows Task Manager or the Unix/Linux top command shows a %CPU usage of 90-100% for extended periods, from minutes to hours. Additionally, this utilization should be unwarranted — in other words, the program shouldn’t be in the middle of intensive work.

3. Possible Root Causes

There are multiple potential root causes for high CPU load. We might have introduced some of these in our implementation, while others result from unexpected system state or utilization.

3.1. Implementation Errors

The first thing we should check for is possible infinite loops in our code. Since other threads keep running, our program can still appear responsive even in these cases.

A potential pitfall is a web app running on an application server (or servlet container like Tomcat, for example). Although we might not explicitly create new threads in our code, the application server handles each request in a separate thread. Because of this, even if some requests are stuck in a loop, the server continues to handle new requests properly. This can give us a false impression that things are running properly when, in reality, the application is underperforming and might even crash if enough threads end up blocked.
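
As a minimal sketch of the scenario (the names here are hypothetical, not part of the article's sample code), a handler like the following pins one core whenever its queue is empty, while the rest of the application keeps serving requests on other threads:

void pollForResults(Queue<String> results) {
    while (true) {
        String result = results.poll();
        if (result != null) {
            System.out.println("Processing " + result);
        }
        // no exit condition and no blocking or waiting:
        // with an empty queue, this thread spins at 100% CPU
    }
}

A thread dump makes this easy to spot; from the outside, the application still looks healthy because other threads keep answering requests.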

3.2. Poor Algorithms or Data Structures

Another possible implementation issue is the introduction of algorithms or data structures that either have bad performance or are incompatible with our specific use case.

Let’s look at a simple example:

List<Integer> generateList() {
    return IntStream.range(0, 10_000_000)
      .parallel()
      .map(IntUnaryOperator.identity())
      .collect(ArrayList::new, List::add, List::addAll);
}

We generate a simple List with 10,000,000 numbers, backed by an ArrayList.

Next, let’s access an entry of the list that’s located near the end:

List<Integer> list = generateList();
long start = System.nanoTime();
int value = list.get(9500000);
System.out.printf("Found value %d in %d nanos\n", value, (System.nanoTime() - start));

Since we’re using an ArrayList, index access is very fast, and the output confirms it:

Found value 9500000 in 49100 nanos

Let’s see what happens if the List implementation changes from ArrayList to LinkedList:

List<Integer> generateList() {
    return IntStream.range(0, 10_000_000)
      .parallel()
      .map(IntUnaryOperator.identity())
      .collect(LinkedList::new, List::add, List::addAll);
}

Running our program now reveals a much slower access time:

Found value 9500000 in 4825900 nanos

We can see that with just a small change, our program became roughly 100 times slower.

Although we would never introduce such a change ourselves, it’s possible that another developer, who’s unaware of how we use generateList, does. Furthermore, we might not even own the generateList API implementation and, thus, have no control over it.
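
If we don't control the implementation we're handed, one defensive option (a sketch of our own, not part of the article's sample code) is to check whether the list actually supports cheap positional access before relying on it:

int elementAt(List<Integer> list, int index) {
    if (list instanceof RandomAccess) {
        // ArrayList and similar implementations: get(index) is O(1)
        return list.get(index);
    }
    // sequential lists such as LinkedList: copy once instead of
    // paying O(n) for every positional access
    return new ArrayList<>(list).get(index);
}

The copy only pays off if we access the list by index repeatedly; for a single lookup, it costs as much as the traversal it avoids.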

3.3. Large and Consecutive GC Cycles

There are also causes that are unrelated to our implementation and might even be outside of our control. One of them is large and consecutive garbage collection (GC) cycles.

This depends on the type of system we’re working on and its usage. An example is a chatroom application where users receive a notification for each message posted. At a small scale, a naïve implementation will work fine.

However, if our application starts growing to millions of users where each one is a member of multiple rooms, the number and the rate of generated notification objects will increase dramatically. This can quickly saturate our heap and trigger stop-the-world garbage collections. While the JVM is cleaning up the heap, our system stops being responsive, which degrades the user experience.
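
A rough sketch of the pattern (the Notification type and the deliver method are hypothetical) shows why: every message allocates one short-lived object per room member, so the allocation rate grows with both the message volume and the room size:

void broadcast(String roomId, List<String> memberIds, String message) {
    for (String userId : memberIds) {
        // one short-lived Notification per member per message; at millions of users,
        // this allocation rate can saturate the heap and keep the collector busy
        deliver(new Notification(roomId, userId, message));
    }
}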

4. Troubleshooting CPU Issues

As is evident from the examples above, troubleshooting these issues can’t always be done simply by inspecting or debugging code. There are, however, some tools we can use to get information on what’s happening with our program and what the culprit might be.

4.1. Using a Profiler

Using a profiler is always a valid and safe option. Whether GC cycles or infinite loops, a profiler will quickly point us to the hot code path.

There are many profilers on the market, both commercial and open-source. Java Flight Recorder, Java Mission Control, and the Diagnostic Command Tool make up a suite of tools that helps us visually troubleshoot such issues.
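
For example, on a reasonably recent JDK we can capture a Flight Recorder recording at startup, or attach to an already running process with the Diagnostic Command Tool, and then open the resulting file in Java Mission Control. The exact options can vary between JDK versions, and our-app.jar and <pid> below are placeholders for our own application and its process ID:

java -XX:StartFlightRecording=duration=60s,filename=recording.jfr -jar our-app.jar
jcmd <pid> JFR.start duration=60s filename=recording.jfr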

4.2. Running Thread Analysis

If a profiler is unavailable, we can do some thread analysis to identify the culprit. There are different tools that we can use, depending on the host OS and environment, but in general, there are two steps:

  1. Use an OS-level tool that displays all running threads, along with their PIDs and CPU percentages, to identify the culprit thread.
  2. Use a JVM-level tool that displays all of our program's threads along with their current stack traces, and look up the culprit thread by its PID.

One such tool is the Linux top command. Running it gives us a view of the currently running processes, among them our Java process:

PID  USER       PR  NI    VIRT    RES    SHR S  %CPU  %MEM     TIME+ COMMAND
3296 User       20   0 6162828   1.9g  25668 S 806.3  25.6   0:30.88 java

We note the PID value 3296. This view helps us identify high CPU usage from our program, but we need to dig further to find which of its threads are problematic.

Running top -H gives us a list of all running threads:

 PID USER       PR  NI    VIRT    RES    SHR S  %CPU  %MEM     TIME+ COMMAND
3335 User       20   0 6162828   2.0g  26484 R  65.3  26.8   0:02.77 Thread-1
3298 User       20   0 6162828   2.0g  26484 R  64.7  26.8   0:02.94 GC Thread#0
3334 User       20   0 6162828   2.0g  26484 R  64.3  26.8   0:02.74 GC Thread#8
3327 User       20   0 6162828   2.0g  26484 R  64.0  26.8   0:02.93 GC Thread#3

We see multiple GC threads taking up CPU time, along with one of our own threads, Thread-1, with PID 3335.

To get a thread dump, we can use jstack. If we run jstack -e 3296, we get our program’s thread dump. We can find Thread-1 either by using its name or its PID in hexadecimal:

"Thread-1" #13 prio=5 os_prio=0 cpu=9430.54ms elapsed=171.26s allocated=19256B defined_classes=0 tid=0x00007f673c188000 nid=0xd07 runnable  [0x00007f671c25c000]
   java.lang.Thread.State: RUNNABLE
        at com.baeldung.highcpu.Application.highCPUMethod(Application.java:40)
        at com.baeldung.highcpu.Application.lambda$main$1(Application.java:61)
        at com.baeldung.highcpu.Application$$Lambda$2/0x0000000840061040.run(Unknown Source)
        at java.lang.Thread.run(java.base/Thread.java:829)

Note that the PID 3335 corresponds to 0xd07 in hexadecimal, which is the value shown in the thread's nid field.
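
If we want to do the conversion ourselves, a quick jshell one-liner is enough:

jshell> Integer.toHexString(3335)
$1 ==> "d07"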

Using the stack information of the thread dump, we can now home in on the problematic code and start fixing it.

5. Conclusion

In this article, we discussed potential root causes for high CPU usage in Java programs. We went through some examples and presented a few ways we can troubleshoot these scenarios.

As always, the source code for this article is available over on GitHub.
