1. Introduction

In this tutorial, we’ll take a look at ElasticJob, part of the Apache ShardingSphere project. We’ll see what it is, how to use it, and what we can do with it.

2. What is ElasticJob?

ElasticJob is a sharded, distributed job scheduling system. It allows us to focus on writing the jobs themselves, whilst ElasticJob handles all of the other details.

ElasticJob also gives us support for various types of jobs, depending on exactly what we need to do:

  • Java-based jobs, which exist as classes in our application.
  • Script jobs, which allow us to run scripts on our host.
  • HTTP jobs, which make an HTTP call to a remote endpoint.

It will then handle everything necessary to schedule our jobs and distribute them across the nodes in our application. ElasticJob also automatically handles details such as failing over when one of our shards goes down and rescheduling misfired jobs.

When running our jobs, we define a number of shards to split the workload. ElasticJob will automatically distribute these shards across all available hosts in our cluster to ensure even load. If any hosts are added or removed from the cluster, the shards will automatically be redistributed to keep the load spread across all hosts.
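To make the distribution idea concrete, we can sketch it as a simple round-robin assignment of shards to hosts. This is a hypothetical illustration only; ElasticJob's real coordination happens through ZooKeeper and is more involved:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical sketch of round-robin shard assignment.
// ElasticJob's actual election and rebalancing logic is coordinated
// via ZooKeeper; this only illustrates the even-spread outcome.
public class ShardAssignment {

    static Map<String, List<Integer>> assign(List<String> hosts, int shardCount) {
        Map<String, List<Integer>> result = new HashMap<>();
        hosts.forEach(host -> result.put(host, new ArrayList<>()));
        for (int shard = 0; shard < shardCount; shard++) {
            // Each shard lands on the next host in turn
            result.get(hosts.get(shard % hosts.size())).add(shard);
        }
        return result;
    }
}
```

With three shards and two hosts, one host receives shards 0 and 2 while the other receives shard 1; if a host joins or leaves, rerunning the assignment spreads the same shards over the new host list.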

3. Dependencies

Before using ElasticJob, we need to include the latest version in our build, which is 3.0.5 at the time of writing.

If we’re using Maven, we can include this dependency in our pom.xml file:

<dependency>
    <groupId>org.apache.shardingsphere.elasticjob</groupId>
    <artifactId>elasticjob-bootstrap</artifactId>
    <version>3.0.5</version>
</dependency>

We’ll also need to have an instance of Zookeeper at runtime to manage coordination between our shards.

At this point, we’re ready to start using it in our application.

4. Setting up ElasticJob

Once we’ve got our ElasticJob dependency set up, we’re ready to start using it.

The first thing we need to do is ensure we have a working ZooKeeper installation. We can do this using Docker for now:

$ docker run --rm -d -p 127.0.0.1:2181:2181 --name elasticjob-zookeeper zookeeper
ZooKeeper JMX enabled by default
Using config: /conf/zoo.cfg
....
2026-02-23 06:33:06,106 [myid:1] - INFO  [main:o.a.z.s.ZooKeeperServer@588] - Snapshot taken in 0 ms
2026-02-23 06:33:06,110 [myid:] - INFO  [ProcessThread(sid:0 cport:2181)::o.a.z.s.PrepRequestProcessor@138] - PrepRequestProcessor (sid:0) started, reconfigEnabled=false

We then need a CoordinatorRegistryCenter instance configured to point to our ZooKeeper instance:

CoordinatorRegistryCenter registryCenter = 
  new ZookeeperRegistryCenter(new ZookeeperConfiguration("localhost:2181", "my-service"));
registryCenter.init();

At this point, ElasticJob is set up and ready to use.

5. Writing a Job

Once ElasticJob is ready, we need to actually write some jobs to use with it.

5.1. Job Implementation

Our jobs are all written as implementations of one of the sub-interfaces of ElasticJob. In this case, we’ll implement SimpleJob:

public class MyJob implements SimpleJob {

    @Override
    public void execute(ShardingContext context) {
        // Job implementation
    }
}

This gives us a single execute() method in which to implement our job. ElasticJob calls it automatically, and what we do inside it is entirely up to us.

5.2. Job Configuration

Once we have a job class, we need to actually configure it. We do this by building a JobConfiguration instance:

JobConfiguration jobConfig = JobConfiguration.newBuilder("MyJob", 3)
  .cron("0 * * * * ?")
  .build();

The newBuilder() method takes the name of our job – which doesn’t need to match the class name – and the number of shards to run it over. We can then provide a cron expression describing how to schedule the job. In this case, it’s on the 0th second of every minute.

We’re also able to configure job parameters using the jobParameter() method:

JobConfiguration jobConfig = JobConfiguration.newBuilder("MyJob", 3)
  .jobParameter("Hello")
  // ... more configuration

Whatever is passed in here can be extracted inside the job class using the getJobParameter() method.

Further, we can provide sharding parameters using the shardingItemParameters() method:

JobConfiguration jobConfig = JobConfiguration.newBuilder("MyJob", 3)
  .shardingItemParameters("0=a,1=b,2=c")
  // ... more configuration

In this case, the provided string needs to be in a special format: a comma-separated list of shardId=value pairs. So here we provide the value “a” to shard 0, “b” to shard 1, and “c” to shard 2.
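To illustrate the format, here's a minimal parser for such a string. This is a hypothetical helper for illustration only; ElasticJob does this parsing internally:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical illustration of how a "0=a,1=b,2=c" string maps
// shard IDs to values. ElasticJob parses this internally.
public class ShardingItemParameters {

    static Map<Integer, String> parse(String parameters) {
        Map<Integer, String> result = new HashMap<>();
        for (String pair : parameters.split(",")) {
            String[] parts = pair.split("=", 2);
            result.put(Integer.parseInt(parts[0].trim()), parts[1].trim());
        }
        return result;
    }
}
```

Looking up a shard that has no entry yields null, mirroring the behaviour of getShardingParameter() described below.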

Within our job, the getShardingParameter() call will get the correct value from this structured string. If no value was found, then we’ll get null back instead.

5.3. Scheduling our Job

Now that we’ve got a job and some job configuration, we’re ready to schedule our job. This is done using the ScheduleJobBootstrap class:

new ScheduleJobBootstrap(registryCenter, new MyJob(), jobConfig)
  .schedule();

Here we need to provide our registry center, job configuration and an instance of our job class. ElasticJob will then record our job details into the registry and arrange for it to be executed on the appropriate schedule.

As soon as this returns, our job will be ready to run across our entire cluster exactly as desired.

6. Job Types

We’ve seen how to create jobs and configure them to run as desired. However, ElasticJob gives us some flexibility on exactly how our jobs work to better fit our needs.

6.1. Simple Jobs

Simple jobs are any that implement the SimpleJob interface. This gives us a single method – void execute(ShardingContext) – that we implement for our entire job. This can then do anything we want within our Java code, and it will simply execute when our job is fired.

The provided ShardingContext instance gives us access to certain details. We have access to:

  • getShardingTotalCount() – the total number of configured shards for this job.
  • getShardingItem() – the 0-based index of this specific shard.
  • getJobParameter() – The job parameter that was configured, if any.
  • getShardingParameter() – The sharding parameter for this specific shard, if any.

We can use these in our Java code to influence job processing. For example, we might make use of the getShardingItem() value to know which of the shards we’re running on and what data to process.
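As an illustration, a job might partition its records by ID modulo the shard count, so each shard processes a disjoint slice of the data. This is our own sketch with hypothetical names, not an ElasticJob API:

```java
import java.util.List;
import java.util.stream.Collectors;

// Hypothetical sketch: each shard keeps only the records whose ID
// falls into its slice, so all shards together cover the full data set.
public class ShardFilter {

    static List<Integer> recordsForShard(List<Integer> recordIds, int shardItem, int shardTotal) {
        return recordIds.stream()
          .filter(id -> id % shardTotal == shardItem)
          .collect(Collectors.toList());
    }
}
```

Inside execute(), we could call this with context.getShardingItem() and context.getShardingTotalCount() to select the records this particular shard should handle.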

6.2. Dataflow Jobs

Dataflow jobs provide an alternative to simple jobs when we need to process lists of items. These implement the DataflowJob<T> interface, where the generic parameter T is the type of item we want to process.

This interface requires us to implement two methods: one to fetch the data to process and another to handle processing this data:

public static class MyDataflowJob implements DataflowJob<MyItem> {
    private MyItemRepository repository;

    @Override
    public List<MyItem> fetchData(ShardingContext shardingContext) {
        return repository.getUnprocessedItems();
    }

    @Override
    public void processData(ShardingContext shardingContext, List<MyItem> list) {
        LOG.info("Processing data {} for job {}", list, shardingContext);
    }
}

This allows us to decouple fetching our data from processing it. We can also configure our job to run in streaming mode:

JobConfiguration jobConfig = JobConfiguration.newBuilder("MyDataflowJob", 3)
  .setProperty(DataflowJobProperties.STREAM_PROCESS_KEY, "true")
  // ... more configuration

This causes ElasticJob to loop between fetchData() and processData() until fetchData() returns either null or an empty list.
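The streaming behaviour boils down to a loop like the following. This is our own sketch of what ElasticJob does internally, with a toy in-memory backlog standing in for a real data source:

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

// Our own sketch of streaming mode: keep alternating between fetching
// and processing until fetchData() returns null or an empty list.
public class StreamingLoop {

    private final Deque<String> backlog = new ArrayDeque<>(List.of("a", "b", "c"));
    final List<String> processed = new ArrayList<>();

    List<String> fetchData() {
        // One item per batch; an empty list signals there's nothing left
        return backlog.isEmpty() ? List.of() : List.of(backlog.poll());
    }

    void processData(List<String> items) {
        processed.addAll(items);
    }

    void run() {
        List<String> batch = fetchData();
        while (batch != null && !batch.isEmpty()) {
            processData(batch);
            batch = fetchData();
        }
    }
}
```

In non-streaming mode, by contrast, the fetch/process pair runs exactly once per trigger.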

6.3. Script Jobs

As well as running jobs written in Java, we can trigger external scripts to perform the required actions. These can be any executable script on the host that’s running the job.

For these, we don’t need to write a job class at all. Instead, we provide the sentinel value “SCRIPT” and appropriate configuration for the script to run:

JobConfiguration jobConfig = JobConfiguration.newBuilder("MyScriptJob", 3)
  .cron("0/5 * * * * ?")
  .setProperty(ScriptJobProperties.SCRIPT_KEY, "/script.sh")
  .build();

new ScheduleJobBootstrap(registryCenter, "SCRIPT", jobConfig)
  .schedule();

This will then execute the command /script.sh every time the job runs. The ShardingContext will be passed to our script as a JSON string in the first argument.

6.4. HTTP Jobs

HTTP jobs allow us to make HTTP requests to a known server, triggering functionality on the remote system. For these, we also provide a sentinel value – this time “HTTP” – and configuration about the HTTP call to make:

JobConfiguration jobConfig = JobConfiguration.newBuilder("MyHttpJob", 3)
  .cron("0/5 * * * * ?")
  .setProperty(HttpJobProperties.URI_KEY, "https://example.com/job")
  .setProperty(HttpJobProperties.METHOD_KEY, "POST")
  .setProperty(HttpJobProperties.DATA_KEY, "source=Baeldung")
  .build();

new ScheduleJobBootstrap(registryCenter, "HTTP", jobConfig)
  .schedule();

This will cause ElasticJob to make an HTTP POST call to this URL every time the job is triggered. This will contain the provided data as the HTTP request body, and will also provide a JSON version of our ShardingContext in the HTTP header ShardingContext:

POST /job HTTP/1.1
Content-Type: application/x-www-form-urlencoded
Content-Length: 15
Host: example.com
ShardingContext: {"jobName":"MyHttpJob","taskId":"MyHttpJob@-@0,1,2@-@READY@[email protected]@-@8253","shardingTotalCount":3,"jobParameter":"Hello","shardingItem":1,"shardingParameter":"b"}

source=Baeldung

Whatever server is then handling this request can execute as needed based on this information.

7. Summary

In this article, we took a very quick look at ElasticJob. There’s a lot more that we can do with this. Next time you need to manage scheduled jobs for your applications, why not give it a try?

As usual, all of the examples from this article are available over on GitHub.
