
1. Overview

In this article, we'll look at conflict-free replicated data types (CRDT) and how to work with them in Java. For our examples, we'll use implementations from the wurmloch-crdt library.

When we have a cluster of N replica nodes in a distributed system, we may encounter a network partition — some nodes are temporarily unable to communicate with each other. This situation is called a split-brain.

When we have a split-brain in our system, some write requests — even for the same user — can go to different replicas that are not connected with each other. When such a situation occurs, our system is still available but is not consistent.

We need to decide what to do with writes and data that are not consistent when the network between two split clusters starts working again.

2. Conflict-Free Replicated Data Types to the Rescue

Let's consider two nodes, A and B, that have become disconnected due to a split-brain.

Let's say that a user changes their login, and the request goes to node A. Then they decide to change it again, but this time the request goes to node B.

Because of the split-brain, the two nodes are not connected. We need to decide how the login of this user should look when the network is working again.

We can utilize a couple of strategies: we can give the opportunity for resolving conflicts to the user (as is done in Google Docs), or we can use a CRDT for merging data from diverged replicas for us.

3. Maven Dependency

First, let's add a dependency to the library that provides a set of useful CRDTs:

<dependency>
    <groupId>com.netopyr.wurmloch</groupId>
    <artifactId>wurmloch-crdt</artifactId>
    <version>0.1.0</version>
</dependency>
The latest version can be found on Maven Central.

4. Grow-Only Set

The most basic CRDT is a Grow-Only Set. Elements can only be added to a GSet and never removed. When the GSet diverges, it can be easily merged by calculating the union of two sets.
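Before wiring up the library, we can sketch why this merge is conflict-free in a few lines of plain Java. SimpleGSet below is a hypothetical illustration, not a wurmloch-crdt class:

```java
import java.util.HashSet;
import java.util.Set;

// Hypothetical minimal G-Set: the state is a plain set, adds are the only
// mutation, and merge is set union -- which is commutative, associative
// and idempotent, so the order in which replicas merge never matters.
public class SimpleGSet<T> {
    private final Set<T> elements = new HashSet<>();

    public void add(T element) {
        elements.add(element);
    }

    // Merging a diverged replica never drops an element
    public void merge(SimpleGSet<T> other) {
        elements.addAll(other.elements);
    }

    public Set<T> get() {
        return new HashSet<>(elements);
    }

    public static void main(String[] args) {
        SimpleGSet<String> replicaA = new SimpleGSet<>();
        SimpleGSet<String> replicaB = new SimpleGSet<>();

        // Writes land on different replicas during a partition
        replicaA.add("apple");
        replicaA.add("strawberry");
        replicaB.add("apple");
        replicaB.add("pear");

        // After the partition heals, merging converges both sides
        replicaA.merge(replicaB);
        System.out.println(replicaA.get().containsAll(
          Set.of("apple", "strawberry", "pear"))); // true
    }
}
```

Because union is idempotent, it's harmless if the same state is exchanged more than once during synchronization.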

First, let's create two replicas to simulate a distributed data structure and connect those two replicas using the connect() method:

LocalCrdtStore crdtStore1 = new LocalCrdtStore();
LocalCrdtStore crdtStore2 = new LocalCrdtStore();
crdtStore1.connect(crdtStore2);

Once we get two replicas in our cluster, we can create a GSet on the first replica and reference it on the second replica:

GSet<String> replica1 = crdtStore1.createGSet("ID_1");
GSet<String> replica2 = crdtStore2.<String>findGSet("ID_1").get();

At this point, our cluster is working as expected, and there is an active connection between two replicas. We can add two elements to the set from two different replicas and assert that the set contains the same elements on both replicas:

replica1.add("apple");
replica2.add("banana");

assertThat(replica1).contains("apple", "banana");
assertThat(replica2).contains("apple", "banana");

Let's say that suddenly we have a network partition and there is no connection between the first and second replicas. We can simulate the network partition using the disconnect() method:

crdtStore1.disconnect(crdtStore2);
Next, when we add elements to the data set from both replicas, those changes are not visible globally because there is no connection between them:

replica1.add("strawberry");
replica2.add("pear");

assertThat(replica1).contains("apple", "banana", "strawberry");
assertThat(replica2).contains("apple", "banana", "pear");

Once the connection between both cluster members is established again, the GSet is merged internally using a union on both sets, and both replicas are consistent again:

crdtStore1.connect(crdtStore2);

assertThat(replica1)
  .contains("apple", "banana", "strawberry", "pear");
assertThat(replica2)
  .contains("apple", "banana", "strawberry", "pear");

5. Increment-Only Counter

An increment-only counter is a CRDT that aggregates all increments locally on each node.

When replicas synchronize after a network partition, the resulting value is calculated by summing all increments on all nodes. This is similar to LongAdder from java.util.concurrent, but at a higher level of abstraction.
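The underlying idea can be sketched as a per-node map of increment totals. SimpleGCounter below is a hypothetical illustration, not the wurmloch-crdt class:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical G-Counter: every node only ever bumps its own slot, the
// counter value is the sum over all slots, and merge takes the per-node
// maximum, so re-applying stale state from another replica is harmless.
public class SimpleGCounter {
    private final String nodeId;
    private final Map<String, Long> counts = new HashMap<>();

    public SimpleGCounter(String nodeId) {
        this.nodeId = nodeId;
    }

    public void increment(long amount) {
        counts.merge(nodeId, amount, Long::sum);
    }

    // Take the maximum per node rather than adding, to stay idempotent
    public void merge(SimpleGCounter other) {
        other.counts.forEach((node, count) -> counts.merge(node, count, Long::max));
    }

    public long get() {
        return counts.values().stream().mapToLong(Long::longValue).sum();
    }

    public static void main(String[] args) {
        SimpleGCounter replicaA = new SimpleGCounter("N_1");
        SimpleGCounter replicaB = new SimpleGCounter("N_2");

        // Increments applied on both sides of a partition
        replicaA.increment(3);
        replicaB.increment(5);

        // Merging sums the per-node totals instead of overwriting them
        replicaA.merge(replicaB);
        System.out.println(replicaA.get()); // 8
    }
}
```

Taking the per-node maximum on merge (instead of adding) is what makes repeated synchronization safe: exchanging the same state twice cannot double-count an increment.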

Let's create an increment-only counter using GCounter and increment it from both replicas. We can see that the sum is calculated properly:

LocalCrdtStore crdtStore1 = new LocalCrdtStore();
LocalCrdtStore crdtStore2 = new LocalCrdtStore();
crdtStore1.connect(crdtStore2);

GCounter replica1 = crdtStore1.createGCounter("ID_1");
GCounter replica2 = crdtStore2.findGCounter("ID_1").get();

replica1.increment();
replica2.increment(2L);

assertThat(replica1.get()).isEqualTo(3L);
assertThat(replica2.get()).isEqualTo(3L);
When we disconnect both cluster members and perform local increment operations, we can see that the values are inconsistent:

crdtStore1.disconnect(crdtStore2);

replica1.increment(3L);
replica2.increment(5L);

assertThat(replica1.get()).isEqualTo(6L);
assertThat(replica2.get()).isEqualTo(8L);
But once the cluster is healthy again, the increments will be merged, yielding the proper value:

crdtStore1.connect(crdtStore2);

assertThat(replica1.get()).isEqualTo(11L);
assertThat(replica2.get()).isEqualTo(11L);
6. PN Counter

Using a similar rule as for the increment-only counter, we can create a counter that can be both incremented and decremented. The PNCounter stores all increments and decrements separately.

When replicas synchronize, the resulting value will be equal to the sum of all increments minus the sum of all decrements:

@Test
public void givenPNCounter_whenReplicasDiverge_thenMergesWithoutConflict() {
    LocalCrdtStore crdtStore1 = new LocalCrdtStore();
    LocalCrdtStore crdtStore2 = new LocalCrdtStore();
    crdtStore1.connect(crdtStore2);

    PNCounter replica1 = crdtStore1.createPNCounter("ID_1");
    PNCounter replica2 = crdtStore2.findPNCounter("ID_1").get();

    replica1.increment();
    replica2.decrement(2L);

    assertThat(replica1.get()).isEqualTo(-1L);
    assertThat(replica2.get()).isEqualTo(-1L);

    crdtStore1.disconnect(crdtStore2);

    replica1.decrement(3L);
    replica2.increment(5L);

    assertThat(replica1.get()).isEqualTo(-4L);
    assertThat(replica2.get()).isEqualTo(4L);

    crdtStore1.connect(crdtStore2);

    assertThat(replica1.get()).isEqualTo(1L);
    assertThat(replica2.get()).isEqualTo(1L);
}
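Conceptually, a PN counter is just a pair of grow-only tallies. SimplePNCounter below is a hypothetical sketch of that idea, not the wurmloch-crdt class:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical PN-Counter: one grow-only tally for increments, one for
// decrements; each merges like a G-Counter (per-node maximum), and the
// value is their difference after both sides have been merged.
public class SimplePNCounter {
    private final String nodeId;
    private final Map<String, Long> increments = new HashMap<>();
    private final Map<String, Long> decrements = new HashMap<>();

    public SimplePNCounter(String nodeId) {
        this.nodeId = nodeId;
    }

    public void increment(long amount) {
        increments.merge(nodeId, amount, Long::sum);
    }

    public void decrement(long amount) {
        decrements.merge(nodeId, amount, Long::sum);
    }

    // Merge each tally by per-node maximum, exactly like a G-Counter
    public void merge(SimplePNCounter other) {
        other.increments.forEach((n, c) -> increments.merge(n, c, Long::max));
        other.decrements.forEach((n, c) -> decrements.merge(n, c, Long::max));
    }

    public long get() {
        long up = increments.values().stream().mapToLong(Long::longValue).sum();
        long down = decrements.values().stream().mapToLong(Long::longValue).sum();
        return up - down;
    }

    public static void main(String[] args) {
        SimplePNCounter replicaA = new SimplePNCounter("N_1");
        SimplePNCounter replicaB = new SimplePNCounter("N_2");

        replicaA.decrement(3); // diverged writes during a partition
        replicaB.increment(5);

        replicaA.merge(replicaB);
        System.out.println(replicaA.get()); // 2
    }
}
```

Keeping increments and decrements in separate grow-only structures is what avoids conflicts: a decrement is never represented as a "removal" that merging could lose.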
7. Last-Writer-Wins Register

Sometimes, we have more complex business rules, and operating on sets or counters is insufficient. We can use the Last-Writer-Wins Register, which keeps only the last updated value when merging diverged data sets. Cassandra uses this strategy to resolve conflicts.

We need to be very cautious when using this strategy because it drops changes that occurred in the meantime.

Let's create a cluster of two replicas and instances of the LWWRegister class:

LocalCrdtStore crdtStore1 = new LocalCrdtStore("N_1");
LocalCrdtStore crdtStore2 = new LocalCrdtStore("N_2");
crdtStore1.connect(crdtStore2);

LWWRegister<String> replica1 = crdtStore1.createLWWRegister("ID_1");
LWWRegister<String> replica2 = crdtStore2.<String>findLWWRegister("ID_1").get();

replica1.set("apple");
replica2.set("banana");

assertThat(replica1.get()).isEqualTo("banana");
assertThat(replica2.get()).isEqualTo("banana");
When the first replica sets the value to apple and the second one changes it to banana, the LWWRegister keeps only the last value.

Let's see what happens if the cluster disconnects:

crdtStore1.disconnect(crdtStore2);

replica1.set("strawberry");
replica2.set("pear");

assertThat(replica1.get()).isEqualTo("strawberry");
assertThat(replica2.get()).isEqualTo("pear");
Each replica keeps its local copy of data that is inconsistent. When we call the set() method, the LWWRegister internally assigns a special version value that identifies the specific update, using a VectorClock algorithm.

When the cluster synchronizes, it takes the value with the highest version and discards every previous update:

crdtStore1.connect(crdtStore2);

assertThat(replica1.get()).isEqualTo("pear");
assertThat(replica2.get()).isEqualTo("pear");
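The last-writer-wins rule can be sketched with a hypothetical SimpleLWWRegister that uses a plain logical timestamp instead of the library's VectorClock (a simplification, not the wurmloch-crdt implementation):

```java
// Hypothetical LWW register: each write bumps a logical timestamp; on
// merge the higher timestamp wins, and the node id breaks ties
// deterministically so every replica converges to the same value.
public class SimpleLWWRegister<T> {
    private final String nodeId;
    private T value;
    private long timestamp;
    private String writerId;

    public SimpleLWWRegister(String nodeId) {
        this.nodeId = nodeId;
        this.writerId = nodeId;
    }

    public void set(T newValue) {
        timestamp++;
        value = newValue;
        writerId = nodeId;
    }

    public void merge(SimpleLWWRegister<T> other) {
        boolean otherWins = other.timestamp > timestamp
          || (other.timestamp == timestamp && other.writerId.compareTo(writerId) > 0);
        if (otherWins) {
            value = other.value;
            timestamp = other.timestamp;
            writerId = other.writerId;
        }
    }

    public T get() {
        return value;
    }

    public static void main(String[] args) {
        SimpleLWWRegister<String> replicaA = new SimpleLWWRegister<>("N_1");
        SimpleLWWRegister<String> replicaB = new SimpleLWWRegister<>("N_2");

        replicaA.set("strawberry");  // timestamp 1 on N_1
        replicaB.set("banana");      // timestamp 1 on N_2
        replicaB.set("pear");        // timestamp 2 on N_2

        // The later write wins; "strawberry" is silently discarded
        replicaA.merge(replicaB);
        System.out.println(replicaA.get()); // pear
    }
}
```

The discarded "strawberry" in the sketch is exactly the caution flagged above: the register converges, but one of the concurrent writes is lost by design.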
8. Conclusion

In this article, we looked at the consistency problem of distributed systems that need to remain available during network partitions.

In case of network partitions, we need to merge the diverged data when the cluster is synchronized. We saw how to use CRDTs to perform a merge of diverged data.

All these examples and code snippets can be found in the GitHub project – this is a Maven project, so it should be easy to import and run as it is.
