
1. Overview

In this article, we’ll take a quick look at Project Loom. In essence, the primary goal of Project Loom is to support a high-throughput, lightweight concurrency model in Java.

2. Project Loom

Project Loom is an attempt by the OpenJDK community to introduce a lightweight concurrency construct to Java. So far, the Loom prototypes have introduced changes to both the JVM and the Java library.

Although there is no scheduled release for Loom yet, we can access the recent prototypes on Project Loom’s wiki.

Before we discuss the various concepts of Loom, let’s discuss the current concurrency model in Java.

3. Java’s Concurrency Model

Presently, Thread represents the core abstraction of concurrency in Java. This abstraction, along with other concurrency APIs, makes it easy to write concurrent applications.

However, since Java implements threads on top of OS kernel threads, it falls short of today's concurrency requirements. There are two major problems in particular:

  1. Threads cannot match the scale of the domain's unit of concurrency. For example, applications routinely serve up to millions of concurrent transactions, users, or sessions, while the kernel supports far fewer threads. Thus, a Thread for every user, transaction, or session is often not feasible (see the sketch after this list).
  2. Most concurrent applications need some synchronization between threads for every request. As a result, expensive context switches happen between OS threads.
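To make the first problem concrete, here's a minimal sketch (with hypothetical numbers) that tries to dedicate one kernel thread to each of a million simulated sessions; on a typical machine it fails well before reaching that count:

int started = 0;
try {
    for (int i = 0; i < 1_000_000; i++) {
        new Thread(() -> {
            try {
                Thread.sleep(60_000); // simulate a session blocked on I/O
            } catch (InterruptedException ignored) { }
        }).start();
        started++;
    }
} catch (OutOfMemoryError e) {
    // the JVM could not create any more native threads
    System.out.println("Could only start " + started + " threads");
}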

A possible solution to such problems is the use of asynchronous concurrent APIs. Common examples are CompletableFuture and RxJava. Since such APIs don't block the kernel thread, they give an application a finer-grained concurrency construct on top of Java threads.
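For instance, here's a minimal sketch of this style with CompletableFuture, where each stage runs as a callback instead of blocking a thread:

CompletableFuture<Void> pipeline = CompletableFuture
  .supplyAsync(() -> "request-42")      // produce a value off the calling thread
  .thenApply(String::toUpperCase)       // transform it without blocking
  .thenAccept(System.out::println);     // consume the result

// the calling thread is free to do other work; join() only if we must wait
pipeline.join();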

On the other hand, such APIs are harder to debug and to integrate with legacy APIs. Thus, there is a need for a lightweight concurrency construct that is independent of kernel threads.

4. Tasks and Schedulers

Any implementation of a thread, either lightweight or heavyweight, depends on two constructs:

  1. Task (also known as a continuation) – A sequence of instructions that can suspend itself for some blocking operation
  2. Scheduler – For assigning the continuation to the CPU and reassigning the CPU from a paused continuation

Presently, Java relies on OS implementations for both the continuation and the scheduler.

Now, suspending a continuation requires storing its entire call stack, and resuming it requires retrieving that call stack. Since the OS implementation of continuations includes the native call stack along with Java's call stack, it results in a heavy footprint.

A bigger problem, though, is the use of the OS scheduler. Since the scheduler runs in kernel mode, it makes no distinction between threads and treats every CPU request in the same manner.

This type of scheduling is not optimal for Java applications in particular.

For example, consider an application thread that performs some action on a request and then passes the data on to another thread for further processing. Here, it would be better to schedule both of these threads on the same CPU. But since the scheduler is agnostic about which thread is requesting the CPU, this is impossible to guarantee.
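As a sketch of such a hand-off, the kernel scheduler sees only two unrelated threads here and may run them on different cores, losing cache locality:

BlockingQueue<String> handOff = new ArrayBlockingQueue<>(1);

Thread first = new Thread(() -> {
    try {
        handOff.put("parsed-request");          // first stage of processing
    } catch (InterruptedException ignored) { }
});
Thread second = new Thread(() -> {
    try {
        System.out.println(handOff.take());     // second stage of processing
    } catch (InterruptedException ignored) { }
});
first.start();
second.start();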

Project Loom proposes to solve this through user-mode threads that rely on the Java runtime's implementation of continuations and schedulers instead of the OS implementations.

5. Fibers

In the recent prototypes in OpenJDK, a new class named Fiber is introduced to the library alongside the Thread class.

Since the planned API for Fiber is similar to Thread, user code should also remain similar. However, there are two main differences (a brief usage sketch follows the list below):

  1. Fiber would wrap any task in an internal user-mode continuation. This would allow the task to suspend and resume in the Java runtime instead of the kernel
  2. A pluggable user-mode scheduler (ForkJoinPool, for example) would be used
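Putting these together, a usage sketch based on an early Loom prototype might look like the following; the exact Fiber API is still in flux and may well change:

// Based on an early Project Loom prototype; the API may still change.
// The task runs on a user-mode fiber backed by a continuation and
// scheduled by the default ForkJoinPool-based scheduler.
Fiber.schedule(() -> System.out.println("Running in a fiber"));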

Let’s go through these two items in detail.

6. Continuations

A continuation (or co-routine) is a sequence of instructions that can yield and be resumed by the caller at a later stage.

Every continuation has an entry point and a yield point. The yield point is where it was suspended. Whenever the caller resumes the continuation, control returns to the last yield point.

It’s important to realize that this suspend/resume now occurs in the language runtime instead of the OS. Therefore, it prevents the expensive context switch between kernel threads.
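For instance, mirroring the scoped Continuation class from the Loom prototypes (the API may still change), a caller could run a continuation until it yields and resume it later:

ContinuationScope scope = new ContinuationScope("demo");
Continuation cont = new Continuation(scope, () -> {
    System.out.println("before yield");
    Continuation.yield(scope);          // suspend at this yield point
    System.out.println("after resume");
});

cont.run();                             // runs until the yield point
cont.run();                             // resumes from the yield point
System.out.println(cont.isDone());      // true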

Similar to threads, Project Loom aims to support nested fibers. Since fibers rely on continuations internally, continuations must also support nesting. To understand this better, consider a class Continuation that allows nesting:

ContinuationScope SCOPE_CONT_1 = new ContinuationScope("cont1");
ContinuationScope SCOPE_CONT_2 = new ContinuationScope("cont2");

Continuation cont1 = new Continuation(SCOPE_CONT_1, () -> {
    Continuation cont2 = new Continuation(SCOPE_CONT_2, () -> {
        // do something
        Continuation.yield(SCOPE_CONT_2); // suspends only the inner continuation
        Continuation.yield(SCOPE_CONT_1); // suspends the enclosing continuation as well
    });
    cont2.run();
});

As shown above, the nested continuation can suspend itself or any of its enclosing continuations by yielding with the corresponding scope variable. For this reason, they are known as scoped continuations.

Since suspending a continuation also requires storing its call stack, it's also a goal of Project Loom to add lightweight stack retrieval while resuming the continuation.

7. Scheduler

Earlier, we discussed the shortcomings of the OS scheduler when it comes to scheduling related threads on the same CPU.

Although it’s a goal for Project Loom to allow pluggable schedulers with fibers, ForkJoinPool in asynchronous mode will be used as the default scheduler. 

ForkJoinPool uses a work-stealing algorithm: every worker thread maintains its own task deque and executes tasks from its head. An idle thread doesn't block waiting for a task; instead, it steals one from the tail of another thread's deque.

The only difference in asynchronous mode is that a worker also takes tasks from the tail of its own deque, processing them in FIFO order.

ForkJoinPool adds a task scheduled by another running task to the current worker's local deque. Hence, it's likely to execute on the same worker thread, and therefore on the same CPU.
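For reference, a ForkJoinPool can already be created in asynchronous mode today by setting the asyncMode flag of its constructor; whether the default fiber scheduler will be configured exactly like this is up to the prototypes:

ForkJoinPool asyncPool = new ForkJoinPool(
  Runtime.getRuntime().availableProcessors(),        // parallelism
  ForkJoinPool.defaultForkJoinWorkerThreadFactory,   // worker thread factory
  null,                                              // no uncaught-exception handler
  true);                                             // asyncMode: FIFO local queues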

8. Conclusion

In this article, we discussed the problems in Java’s current concurrency model and the changes proposed by Project Loom.

In doing so, we also defined tasks and schedulers, and looked at how Fibers and ForkJoinPool could provide an alternative to Java's reliance on kernel threads.
