1. Introduction

In this tutorial, we’ll explore the TigerBeetle database engine and learn how we can use it to build a fault-tolerant and high-performance application.

2. Financial Transactions in a Nutshell

Every time we use a debit or credit card to buy something online or in a store, there's a transaction in which some currency amount is transferred from our account to the merchant's.

Behind the scenes, fees must be deducted from the transferred value and then split among all parties involved (acquirer, card processing company, banks, etc.). All those entities must also keep detailed transaction logs, also known as ledgers, named after the books in which accountants used to record them.

Nowadays, most financial transaction systems rely on database engines, such as Oracle, SQL Server, and DB2, to store transactions. A typical system will have an accounts table that holds balances and a transactions table that logs every debit or credit made to the accounts.

While this works well, the general-purpose nature of these databases leads to several inefficiencies and, consequently, requires far more resources to deploy and operate at large scale.

3. TigerBeetle’s Approach

A newcomer in the crowded database market, TigerBeetle is a specialized database engine that focuses primarily on financial transactions. By doing so, it can get rid of most of the complexity associated with a general-purpose database engine and, in exchange, claims to be able to deliver up to 1000x throughput improvement.

These are the main simplifications that make this improvement possible:

  • Fixed schema
  • Fixed-point arithmetic
  • Batched transactions
  • No general query capabilities

Perhaps the most surprising of these is the fixed schema. TigerBeetle has just two entities: Accounts and Transfers.

An Account stores the balance of some asset (currency, stocks, bitcoins, etc.), which can be anything that we can acquire or transfer to/from another Account belonging to the same ledger. An Account also has a few fields intended to hold external identifiers, allowing us to link it to traditional system-of-record databases.

To add some credit to an Account, we create a Transfer instance that contains the amount, the source Account from which we'll deduct this amount, and the destination Account.

These are some important features of Accounts and Transfers:

  • Once created, an Account cannot be deleted
  • The Account's initial balance is always zero
  • Transfers are immutable. Once committed, they cannot be modified or deleted
  • Transfers require both Accounts to be on the same ledger
  • At all times, the sum of debits and credits over all Accounts is zero

4. Deploying TigerBeetle

TigerBeetle is distributed as a statically linked executable, available at the official site.

Before using TigerBeetle, we need to create a data file where it will store its data. This is done using the format command:

$ tigerbeetle format --cluster=0 --replica=0 --replica-count=1 0_0.tigerbeetle

We can now start a standalone instance using the start command:

$ tigerbeetle start --addresses=3000 0_0.tigerbeetle

5. Using TigerBeetle from Java Applications

This is the Maven dependency for TigerBeetle's official Java client:

<dependency>
    <groupId>com.tigerbeetle</groupId>
    <artifactId>tigerbeetle-java</artifactId>
    <version>0.15.3</version>
</dependency>

The latest version of this library is available from Maven Central.

Important notice: This dependency contains platform-specific native code. Make sure to run your client only on supported architectures.

5.1. Connecting to TigerBeetle

The entry point for accessing TigerBeetle's functionality is the Client class. Client instances are thread-safe, so we only need a single instance in our application. For Spring-based applications, the simplest approach is to define a @Bean in a @Configuration class, so we can inject it where needed:

@Configuration
public class TigerBeetleConfig {
    @Value("${tigerbeetle.clusterID:0}")
    private BigInteger clusterID;

    @Value("${tb_address:3000}")
    private String[] replicaAddress;

    @Bean
    Client tigerBeetleClient() {
        return new Client(UInt128.asBytes(clusterID), replicaAddress);
    }
}

5.2. Creating Accounts

TigerBeetle’s API doesn’t come with any domain object, so let’s create a simple Account record to store the data we need to create one:

@Builder
public record Account(
    UUID id,
    BigInteger accountHolderId,
    int code,
    int ledger,
    int userData32,
    long userData64,
    BigInteger creditsPosted,
    BigInteger creditsPending,
    BigInteger debitsPosted,
    BigInteger debitsPending,
    int flags,
    long timestamp) {
}

Here, we've used a UUID account identifier as a convenience. Internally, TigerBeetle uses 128-bit integers as account identifiers, which the Java API maps to 16-byte arrays. Our domain class has an accountHolderId, which maps to the userData128 field.

The API uses 128-bit integers in many places but, since Java has no equivalent native datatype, the library provides the UInt128 utility class to convert between byte arrays and other formats. Besides UUIDs, we can also use BigIntegers or a pair of regular long values.

The userData128, userData32, and userData64 fields' main use is to store secondary identifiers associated with this account. For example, we can use them to store the identifier of this account in an external database.
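
To make this concrete, here is a minimal, hypothetical sketch of those conversions. It only uses the UInt128 helpers that appear elsewhere in this article (id(), asBytes(), and asUUID()), and assumes the client classes live in the com.tigerbeetle package:

import java.math.BigInteger;
import java.util.UUID;

import com.tigerbeetle.UInt128;

public class UInt128Conversions {
    public static void main(String[] args) {
        byte[] newId = UInt128.id();          // unique, time-based 128-bit identifier
        UUID asUuid = UInt128.asUUID(newId);  // view the same 16 bytes as a UUID

        byte[] fromUuid = UInt128.asBytes(asUuid);                         // UUID -> 16-byte array
        byte[] fromBigInteger = UInt128.asBytes(BigInteger.valueOf(42L));  // BigInteger -> 16-byte array

        System.out.println(asUuid);
    }
}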

Now, let’s create an AccountRepository and implement a createAccount() method:

@RequiredArgsConstructor
public class AccountRepository {
    private final Client client;

    public Account createAccount(BigInteger accountHolderId, int code, int ledger,
      int userData32, long userData64, int flags ) {
        AccountBatch batch = new AccountBatch(1);
        byte[] id = UInt128.id(); 
        batch.add();
        batch.setId(id);
        batch.setUserData128(UInt128.asBytes(accountHolderId));
        batch.setLedger(ledger);
        batch.setCode(code);
        batch.setFlags(AccountFlags.HISTORY | flags);

        CreateAccountResultBatch result = client.createAccounts(batch);
        if(result.getLength() > 0) {
            result.next();
            throw new AccountException(result.getResult());
        }

        return findAccountById(UInt128.asUUID(id)).orElseThrow();
    }

   // ... other repository methods omitted
}

The implementation starts by creating an AccountBatch object to hold the batch data. In this example, the batch consists of a single account creation command, but we could easily extend this model to accept multiple requests.

Notice the AccountFlags.HISTORY flag. When set, we'll be able to query historical balances, as we'll see later. Also important is the use of UInt128.id() to generate the Account identifier. Values returned from this method are unique and time-based, meaning that if we compare them, we can determine which one was created first.

Once we’ve populated the batch request, we send it to TigerBeetle using the createAccounts() method. This method returns a CreateAccountResultBatch, which will be empty in case of a successful request. Otherwise, there will be an entry for each failed creation request containing the reason for the failure.

As a convenience to the caller, the method returns an Account domain object populated with data retrieved from the database, which includes the actual creation timestamp set by TigerBeetle.
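
As a quick, hypothetical usage sketch (the ledger and code values below are arbitrary placeholders, not part of the original example):

// Hypothetical usage of createAccount(); ledger and code values are placeholders
Account openCheckingAccount(AccountRepository repo) {
    return repo.createAccount(
      BigInteger.valueOf(1001L), // accountHolderId, stored in userData128
      1000,                      // code: application-defined account type
      1,                         // ledger this account belongs to
      0, 0,                      // userData32 / userData64, unused here
      0);                        // no extra flags (HISTORY is added by createAccount())
}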

5.3. Looking Up Accounts

To implement findAccountById(), we follow a pattern similar to the previous case. First, we create a batch to hold the identifiers we want to find. To keep things simple, we'll limit ourselves to just a single account per call.

Next, we submit this batch to TigerBeetle and process the results:

public Optional<Account> findAccountById(UUID id) throws ConcurrencyExceededException {
    IdBatch idBatch = new IdBatch(UInt128.asBytes(id));
    var batch = client.lookupAccounts(idBatch);

    if (!batch.next()) {
        return Optional.empty();
    }

    return Optional.of(mapFromCurrentAccountBatch(batch));
 }

Notice the use of next() to determine whether the given identifier exists or not. This works because of the single account limitation mentioned above.

A variant of this method that supports multiple identifiers is available in the source code. There, we populate the resulting Map using the returned values, leaving null entries for any identifier not found.
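
As a rough, hypothetical sketch of such a variant, we can reuse the single-id lookup (the actual version in the source code sends one batched request instead, which is more efficient):

// Hypothetical sketch: the real implementation batches the identifiers in a single request
public Map<UUID, Account> findAccountsByIds(List<UUID> ids) throws ConcurrencyExceededException {
    Map<UUID, Account> result = new HashMap<>();
    for (UUID id : ids) {
        // null entries mark identifiers that were not found
        result.put(id, findAccountById(id).orElse(null));
    }
    return result;
}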

5.4. Creating Simple Transfers

Let's start with the simplest case: a Transfer between two accounts belonging to the same ledger. Besides the source and destination accounts and ledger, we'll also allow users of our repository to add some metadata: code, userData128, userData64, and userData32. Although optional, these metadata fields are useful to link this Transfer to external systems.

public UUID createSimpleTransfer(UUID sourceAccount, UUID targetAccount, BigInteger amount,
  int ledger, int code, UUID userData128, long userData64, int userData32)  {
    var id = UInt128.id();
    var batch = new TransferBatch(1);

    batch.add();
    batch.setId(id);
    batch.setAmount(amount);
    batch.setCode(code);
    batch.setCreditAccountId(UInt128.asBytes(targetAccount));
    batch.setDebitAccountId(UInt128.asBytes(sourceAccount));
    batch.setUserData32(userData32);
    batch.setUserData64(userData64);
    batch.setUserData128(UInt128.asBytes(userData128));
    batch.setLedger(ledger);

    var batchResults = client.createTransfers(batch);

    if (batchResults.getLength() > 0) {
        batchResults.next();
        throw new TransferException(batchResults.getResult());
    }
    return UInt128.asUUID(id);
}

If the operation succeeds, the amount will be added to the source Account‘s debitsPosted field and the destination Account‘s creditsPosted.
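
As an illustration, a hypothetical call that moves 100 units between two previously created accounts could look like this (ledger and code values are placeholders):

// Hypothetical usage: move 100 units from sourceAcc to targetAcc on ledger 1
UUID transferId = repo.createSimpleTransfer(
  sourceAcc.id(),            // debit side
  targetAcc.id(),            // credit side
  BigInteger.valueOf(100L),  // amount
  1,                         // ledger (must be the same for both accounts)
  500,                       // code: application-defined transfer type
  UUID.randomUUID(),         // userData128, e.g. an external transaction id
  0L, 0);                    // userData64 / userData32, unused here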

5.5. Balance Queries

When an Account is created with the HISTORY flag set, we can query its balances as they change as a result of transfers. The API expects an AccountFilter filled with the Account identifier and a time range. The filter also supports parameters to limit the number of returned entries and control their order.

This is how we've used the getAccountBalances() API to implement the repository's listAccountBalances() method:

List<Balance> listAccountBalances(UUID accountId, Instant start, Instant end, int limit, boolean lastFirst) {
    var filter = new AccountFilter();
    filter.setAccountId(UInt128.asBytes(accountId));
    filter.setCredits(true);
    filter.setDebits(true);
    filter.setLimit(limit);
    filter.setReversed(lastFirst);
    filter.setTimestampMin(start.toEpochMilli());
    filter.setTimestampMax(end.toEpochMilli());

    var batch = client.getAccountBalances(filter);
    var result = new ArrayList<Balance>();
    while(batch.next()) {
        result.add(
          Balance.builder()
            .accountId(accountId)
            .debitsPending(batch.getDebitsPending())
            .debitsPosted(batch.getDebitsPosted())
            .creditsPending(batch.getCreditsPending())
            .creditsPosted(batch.getCreditsPosted())
            .timestamp(Instant.ofEpochMilli(batch.getTimestamp()))
            .build()
        );
    }

    return result;
 }

Notice that the API’s result doesn’t include information about the associated transactions, thus limiting its use in practice. However, as mentioned in the official documentation, this API is likely to change in future versions.

5.6. Transfer Queries

Currently, getAccountTransfers() is the most useful of the available query APIs – not a big achievement, given there are only two ;^). It works similarly to getAccountBalances(), including the use of an AccountFilter to specify the query criteria:

public List<Transfer> listAccountTransfers(UUID accountId, Instant start, Instant end, int limit, boolean lastFirst) {
    var filter = new AccountFilter();
    filter.setAccountId(UInt128.asBytes(accountId));
    filter.setCredits(true);
    filter.setDebits(true);
    filter.setReversed(lastFirst);
    filter.setTimestampMin(start.toEpochMilli());
    filter.setTimestampMax(end.toEpochMilli());
    filter.setLimit(limit);

    var batch = client.getAccountTransfers(filter);
    var result = new ArrayList<Transfer>();
    while(batch.next()) {
        result.add(Transfer.builder()
          .id(UInt128.asUUID(batch.getId()))
          .code(batch.getCode())
          .amount(batch.getAmount())
          .flags(batch.getFlags())
          .ledger(batch.getLedger())
          .creditAccountId(UInt128.asUUID(batch.getCreditAccountId()))
          .debitAccountId(UInt128.asUUID(batch.getDebitAccountId()))
          .userData128(UInt128.asUUID(batch.getUserData128()))
          .userData64(batch.getUserData64())
          .userData32(batch.getUserData32())
          .timestamp(Instant.ofEpochMilli(batch.getTimestamp()))
          .pendingId(UInt128.asUUID(batch.getPendingId()))
          .build());
    }

    return result;
}

5.7. Two-Phase Transfers

TigerBeetle makes a clear distinction between pending and posted transfers. This distinction is made evident by the fact that an Account has four balance fields: two for posted and two for pending values.

In our earlier Transfer example, we didn't specify its type. In this case, the API defaults to a posted Transfer, meaning the amount will be added directly to the debits_posted or credits_posted field.

To create a pending Transfer, we have to set the PENDING flag:

public UUID createPendingTransfer(UUID sourceAccount, UUID targetAccount, BigInteger amount,
  int ledger, int code, UUID userData128, long userData64, int userData32) throws ConcurrencyExceededException {

    var id = UInt128.id();
    var batch = new TransferBatch(1);
    // ... fill batch data (same as regular Transfer)
    batch.setFlags(TransferFlags.PENDING);

    // ... send transfer and handle results (same as regular Transfer) 
}

A pending Transfer should always be confirmed (POST_PENDING_TRANSFER) or canceled (VOID_PENDING_TRANSFER) by a later Transfer request. In both cases, we must include the original Transfer identifier in the pendingId field:

public UUID completePendingTransfer(UUID pendingId, boolean success) throws ConcurrencyExceededException {
    var id = UInt128.id();
    var batch = new TransferBatch(1);

    batch.add();
    batch.setId(id);
    batch.setPendingId(UInt128.asBytes(pendingId));
    batch.setFlags(success ? TransferFlags.POST_PENDING_TRANSFER : TransferFlags.VOID_PENDING_TRANSFER);

    var batchResults = client.createTransfers(batch);

    if (batchResults.getLength() > 0) {
        batchResults.next();
        throw new TransferException(batchResults.getResult());
    }
    return UInt128.asUUID(id);
}

A typical scenario where this feature can be used is an authorization server that handles requests from an ATM. First, the client provides their account and the amount to withdraw. The authorization server then creates a PENDING Transfer and returns the generated Transfer identifier.

Next, the ATM proceeds to dispense the money. There are two possible outcomes: if everything goes right, the ATM sends another message to the authorization server and confirms the Transfer.

However, if something goes wrong (e.g. there are no bills available or a stuck dispenser), the ATM cancels the Transfer.
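
Putting the two repository methods together, the flow could be sketched as follows. The method and variable names here are hypothetical, and we assume the transfer methods shown above live in a repository class injected as repo:

// Hypothetical sketch of the ATM authorization flow described above
public UUID authorizeWithdrawal(UUID customerAcc, UUID atmCashAcc, BigInteger amount,
  int ledger, int code) throws ConcurrencyExceededException {
    // Phase 1: reserve the funds with a pending Transfer
    return repo.createPendingTransfer(customerAcc, atmCashAcc, amount,
      ledger, code, UUID.randomUUID(), 0L, 0);
}

public void settleWithdrawal(UUID pendingTransferId, boolean cashDispensed)
  throws ConcurrencyExceededException {
    // Phase 2: post the Transfer if the cash was dispensed, void it otherwise
    repo.completePendingTransfer(pendingTransferId, cashDispensed);
}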

5.8. Two-Phase Transfer Timeouts

To account for a communication failure happening between the initial authorization request and its confirmation or cancellation, we can pass an optional timeout in the first step:

public UUID createExpirablePendingTransfer(UUID sourceAccount, UUID targetAccount, BigInteger amount,
  int ledger, int code, UUID userData128, long userData64, int userData32, int timeout) throws ConcurrencyExceededException {

    var id = UInt128.id();
    var batch = new TransferBatch(1);
    // ... prepare batch (same as regular pending Transfer)
    batch.setTimeout(timeout);

    // ... send batch and handle results (same as regular pending Transfer)
}

If the timeout expires before a request to confirm or cancel it arrives, TigerBeetle will automatically void the pending Transfer.

5.9. Linked Operations

Often, it's important to ensure that a group of operations sent to TigerBeetle either completes or fails as a whole. We can think of it as an analog of regular database transactions, where we can issue multiple inserts and commit them all at once at the end.

To support this scenario, TigerBeetle has the concept of linked events. In a nutshell, to create a group of Account or Transfer records as a single transaction, all items except the last must have the LINKED flag set:

public List<Map.Entry<UUID, CreateTransferResult>> createLinkedTransfers(List<Transfer> transfers)
  throws ConcurrencyExceededException {
    var results = new ArrayList<Map.Entry<UUID, CreateTransferResult>>(transfers.size());
    var batch = new TransferBatch(transfers.size());
    for (Transfer t : transfers) {
        byte[] id = UInt128.id();
        batch.add();
        batch.setId(id);

        // Set the LINKED flag on every transfer except the last one in the chain
        if (batch.getPosition() != transfers.size() - 1) {
            batch.setFlags(TransferFlags.LINKED);
        }

        batch.setLedger(t.ledger());
        batch.setAmount(t.amount());
        batch.setDebitAccountId(UInt128.asBytes(t.debitAccountId()));
        batch.setCreditAccountId(UInt128.asBytes(t.creditAccountId()));
        if (t.userData128() != null) {
            batch.setUserData128(UInt128.asBytes(t.userData128()));
        }
        batch.setCode(t.code());
        results.add(new AbstractMap.SimpleImmutableEntry<>(UInt128.asUUID(id), CreateTransferResult.Ok));
    }

    var batchResult = client.createTransfers(batch);
    while (batchResult.next()) {
        var original = results.get(batchResult.getIndex());
        results.set(batchResult.getIndex(),
          new AbstractMap.SimpleImmutableEntry<>(original.getKey(), batchResult.getResult()));
    }

    return results;
}

TigerBeetle ensures that linked operations will be executed in order and either committed or rolled back fully. Also important is the fact that the side effects of one operation will be visible to the next in the chain.

For instance, consider an Account created with the DEBITS_MUST_NOT_EXCEED_CREDITS flag. If we create two linked transfer commands such that the second one results in an overdraft, both transfers will be rejected:

@Test
void whenLinkedTransfersExceedCredits_thenBothFail() throws Exception {
    var MY_LEDGER = 1000;
    var CHECKING_ACCOUNT = 1000;
    var P2P_TRANSFER = 500;

    var liabilitiesAcc = repo.createAccount(
      BigInteger.valueOf(1000L),
      CHECKING_ACCOUNT,
      MY_LEDGER, 0,0, 0);

    var sourceAcc = repo.createAccount(
      BigInteger.valueOf(1001L),
      CHECKING_ACCOUNT,
      MY_LEDGER, 0,0, AccountFlags.DEBITS_MUST_NOT_EXCEED_CREDITS);

    var targetAcc = repo.createAccount(
      BigInteger.valueOf(1002L),
      CHECKING_ACCOUNT,
      MY_LEDGER, 0, 0, 0);

    List<Transfer> transfers = List.of(
      Transfer.builder()
        .debitAccountId(liabilitiesAcc.id())
        .ledger(MY_LEDGER)
        .code(P2P_TRANSFER)
        .creditAccountId(sourceAcc.id())
        .amount(BigInteger.valueOf(1_000L))
        .build(),
      Transfer.builder()
        .debitAccountId(sourceAcc.id())
        .ledger(MY_LEDGER)
        .code(P2P_TRANSFER)
        .creditAccountId(targetAcc.id())
        .amount(BigInteger.valueOf(2_000L))
        .build()
      );

    var results = repo.createLinkedTransfers(transfers);
    assertEquals(2, results.size());
    assertEquals(CreateTransferResult.LinkedEventFailed, results.get(0).getValue());
    assertEquals(CreateTransferResult.ExceedsCredits, results.get(1).getValue());
}

In this case, we see that the first Transfer, which would succeed in a non-linked scenario, fails because the second one would result in an overdraft.

6. Conclusion

In this article, we've explored the TigerBeetle database and its features. Despite its limited query capabilities, it offers great performance and strong runtime guarantees, making it a good candidate for any application where its double-entry ledger model is applicable.

The code backing this article is available on GitHub.