Distributed Job Scheduling Using ElasticJob
Last updated: March 6, 2026
1. Introduction
In this tutorial, we’ll take a look at ElasticJob, part of the Apache ShardingSphere project. We’ll see what it is, how to use it and what we can do with it.
2. What is ElasticJob?
ElasticJob is a sharded, distributed job scheduling system. It allows us to focus on writing the jobs themselves, while ElasticJob handles the scheduling, distribution, and coordination details for us.
ElasticJob also gives us support for various types of jobs, depending on exactly what we need to do:
- Java-based jobs, which exist as classes in our application.
- Script jobs, which allow us to run scripts on our host.
- HTTP jobs, which make an HTTP call to a remote endpoint.
It will then handle everything necessary to schedule our jobs and distribute them across the nodes in our cluster. ElasticJob also automatically handles details such as failing over shards when a node dies and re-running misfired jobs.
When running our jobs, we define a number of shards to split the workload. ElasticJob will automatically distribute these shards across all available hosts in our cluster to ensure even load. If any hosts are added or removed from the cluster, the shards will automatically be redistributed to keep the load spread across all hosts.
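As a rough illustration of this distribution, a round-robin assignment of shard indices to hosts looks like the following. This is a simplified sketch in plain Java; the ShardDistribution class and distribute() method are hypothetical names, not part of ElasticJob, whose real sharding strategies are pluggable and coordinated through ZooKeeper:

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class ShardDistribution {

    // Round-robin assignment: shard i goes to host i % hostCount.
    // ElasticJob's real strategy is pluggable, but the default aims
    // for a similarly even spread across the live hosts.
    static Map<String, List<Integer>> distribute(List<String> hosts, int shardCount) {
        Map<String, List<Integer>> assignment = new LinkedHashMap<>();
        hosts.forEach(host -> assignment.put(host, new ArrayList<>()));
        for (int shard = 0; shard < shardCount; shard++) {
            assignment.get(hosts.get(shard % hosts.size())).add(shard);
        }
        return assignment;
    }
}
```

With three shards and two hosts, one host ends up with shards 0 and 2 and the other with shard 1; if a third host joined, a redistribution along these lines would give each host exactly one shard.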
3. Dependencies
Before using ElasticJob, we need to include the latest version in our build, which is 3.0.5 at the time of writing.
If we’re using Maven, we can include this dependency in our pom.xml file:
<dependency>
    <groupId>org.apache.shardingsphere.elasticjob</groupId>
    <artifactId>elasticjob-bootstrap</artifactId>
    <version>3.0.5</version>
</dependency>
We’ll also need to have an instance of ZooKeeper at runtime to manage coordination between our shards.
At this point, we’re ready to start using it in our application.
4. Setting up ElasticJob
Once we’ve got our ElasticJob dependency set up, we’re ready to start using it.
The first thing we need to do is ensure we have a working ZooKeeper installation. We can do this using Docker for now:
$ docker run --rm -d -p 127.0.0.1:2181:2181 --name elasticjob-zookeeper zookeeper
ZooKeeper JMX enabled by default
Using config: /conf/zoo.cfg
....
2026-02-23 06:33:06,106 [myid:1] - INFO [main:o.a.z.s.ZooKeeperServer@588] - Snapshot taken in 0 ms
2026-02-23 06:33:06,110 [myid:] - INFO [ProcessThread(sid:0 cport:2181)::o.a.z.s.PrepRequestProcessor@138] - PrepRequestProcessor (sid:0) started, reconfigEnabled=false
We then need a CoordinatorRegistryCenter instance configured to point to our ZooKeeper instance:
CoordinatorRegistryCenter registryCenter =
new ZookeeperRegistryCenter(new ZookeeperConfiguration("localhost:2181", "my-service"));
registryCenter.init();
At this point, ElasticJob is set up and ready to use.
5. Writing a Job
Once ElasticJob is ready, we need to actually write some jobs to use with it.
5.1. Job Implementation
Our jobs are all written as an implementation of one of the subclasses of ElasticJob. In this case, we’ll subclass SimpleJob:
public class MyJob implements SimpleJob {
    @Override
    public void execute(ShardingContext context) {
        // Job implementation
    }
}
This gives us a single execute() method in which to implement our job. ElasticJob calls this method automatically whenever the job fires, and what we do inside it is entirely up to us.
5.2. Job Configuration
Once we have a job class, we need to actually configure it. We do this by building a JobConfiguration instance:
JobConfiguration jobConfig = JobConfiguration.newBuilder("MyJob", 3)
    .cron("0 * * * * ?")
    .build();
The newBuilder() method takes the name of our job – which doesn’t need to match the class name – and the number of shards to run it over. We can then provide a cron expression describing how to schedule the job. In this case, it’s on the 0th second of every minute.
We’re also able to configure job parameters using the jobParameter() method:
JobConfiguration jobConfig = JobConfiguration.newBuilder("MyJob", 3)
    .jobParameter("Hello")
    // ... more configuration
Whatever is passed in here can be extracted inside the job class using the getJobParameter() method.
Further, we can provide sharding parameters using the shardingItemParameters() method:
JobConfiguration jobConfig = JobConfiguration.newBuilder("MyJob", 3)
    .shardingItemParameters("0=a,1=b,2=c")
    // ... more configuration
In this case, the provided string needs to be in a special format. It’s a comma-separated list of shard ID to value. So here we provide the value “a” to shard 0, “b” to shard 1, and so on.
Within our job, the getShardingParameter() call will get the correct value from this structured string. If no value was found, then we’ll get null back instead.
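The parsing ElasticJob performs on this string can be pictured with a small helper. This is not ElasticJob's own code; ShardingItemParameters and parse() are hypothetical names for a sketch of the format's semantics:

```java
import java.util.HashMap;
import java.util.Map;

public class ShardingItemParameters {

    // Parses the "0=a,1=b,2=c" format into a shard-index -> parameter map,
    // mirroring what getShardingParameter() returns for each shard
    static Map<Integer, String> parse(String spec) {
        Map<Integer, String> params = new HashMap<>();
        for (String pair : spec.split(",")) {
            String[] kv = pair.split("=", 2);
            params.put(Integer.parseInt(kv[0].trim()), kv[1].trim());
        }
        return params;
    }
}
```

Here parse("0=a,1=b,2=c").get(1) yields "b", matching what getShardingParameter() would return inside shard 1, and any unmapped shard index yields null.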
5.3. Scheduling our Job
Now that we’ve got a job and some job configuration, we’re ready to schedule our job. This is done using the ScheduleJobBootstrap class:
new ScheduleJobBootstrap(registryCenter, new MyJob(), jobConfig)
    .schedule();
Here we need to provide our registry center, job configuration and an instance of our job class. ElasticJob will then record our job details into the registry and arrange for it to be executed on the appropriate schedule.
As soon as this returns, our job will be ready to run across our entire cluster exactly as desired.
6. Job Types
We’ve seen how to create jobs and configure them to run as desired. However, ElasticJob gives us some flexibility on exactly how our jobs work to better fit our needs.
6.1. Simple Jobs
Simple jobs are any that implement the SimpleJob interface. This gives us a single method – void execute(ShardingContext) – that we implement for our entire job. This can then do anything we want within our Java code, and it will simply execute when our job is fired.
The provided ShardingContext instance gives us access to details about the current execution:
- getShardingTotalCount() – the total number of configured shards for this job.
- getShardingItem() – the 0-based index of this specific shard.
- getJobParameter() – The job parameter that was configured, if any.
- getShardingParameter() – The sharding parameter for this specific shard, if any.
We can use these in our Java code to influence job processing. For example, we might make use of the getShardingItem() value to know which of the shards we’re running on and what data to process.
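For instance, a common way to split work by shard is to let each shard claim only the records whose numeric ID maps onto its index, so shards never overlap. The helper below is a hypothetical sketch of that pattern; inside a SimpleJob we'd call it with context.getShardingItem() and context.getShardingTotalCount():

```java
import java.util.List;
import java.util.stream.Collectors;

public class ShardFilter {

    // Each shard keeps only the IDs for which id % shardingTotalCount
    // equals its own shardingItem, partitioning the data set disjointly
    static List<Long> itemsForShard(List<Long> ids, int shardingItem, int shardingTotalCount) {
        return ids.stream()
            .filter(id -> id % shardingTotalCount == shardingItem)
            .collect(Collectors.toList());
    }
}
```

With three shards over IDs 0 through 5, shard 0 processes 0 and 3, shard 1 processes 1 and 4, and shard 2 processes 2 and 5.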
6.2. Dataflow Jobs
Dataflow jobs provide an alternative to simple jobs when we need to process lists of items. These implement the DataflowJob<T> interface, where the generic parameter T is the type of item we want to process.
This interface requires us to implement two methods: one to fetch the data to process and another to handle processing this data:
public static class MyDataflowJob implements DataflowJob<MyItem> {
    private MyItemRepository repository;

    @Override
    public List<MyItem> fetchData(ShardingContext shardingContext) {
        return repository.getUnprocessedItems();
    }

    @Override
    public void processData(ShardingContext shardingContext, List<MyItem> list) {
        LOG.info("Processing data {} for job {}", list, shardingContext);
    }
}
This allows us to decouple fetching our data from processing it. We can also configure our job to run in streaming mode:
JobConfiguration jobConfig = JobConfiguration.newBuilder("MyDataflowJob", 3)
    .setProperty(DataflowJobProperties.STREAM_PROCESS_KEY, "true")
    // ... more configuration
This causes ElasticJob to loop between fetchData() and processData() until fetchData() returns either null or an empty list.
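The streaming contract can be sketched as a plain loop. StreamingLoop here is a hypothetical stand-in, not ElasticJob code: fetchData() drains a queue in small batches, and the loop stops as soon as a fetch comes back empty, just as a streaming dataflow job stops:

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

public class StreamingLoop {

    // Keep alternating fetch and process until there is nothing left,
    // mirroring the streaming-mode contract described above
    static int runStreaming(Deque<String> source, List<String> processed) {
        int rounds = 0;
        while (true) {
            List<String> batch = fetchData(source, 2); // stand-in for fetchData()
            if (batch == null || batch.isEmpty()) {
                break;                                 // streaming stops here
            }
            processed.addAll(batch);                   // stand-in for processData()
            rounds++;
        }
        return rounds;
    }

    // Pulls up to batchSize items off the queue
    static List<String> fetchData(Deque<String> source, int batchSize) {
        List<String> batch = new ArrayList<>();
        while (batch.size() < batchSize && !source.isEmpty()) {
            batch.add(source.poll());
        }
        return batch;
    }
}
```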
6.3. Script Jobs
As well as running jobs written in Java, we can trigger external scripts to perform the required actions. These can be any executable script on the host that’s running the job.
For these, we don’t need to write a job class at all. Instead, we provide the sentinel value “SCRIPT” and appropriate configuration for the script to run:
JobConfiguration jobConfig = JobConfiguration.newBuilder("MyScriptJob", 3)
    .cron("0/5 * * * * ?")
    .setProperty(ScriptJobProperties.SCRIPT_KEY, "/script.sh")
    .build();

new ScheduleJobBootstrap(registryCenter, "SCRIPT", jobConfig)
    .schedule();
This will then execute the command /script.sh every time the job runs. Our ShardingContext is passed to the script as a JSON string in its first argument, so the script can tell which shard it's running as.
6.4. HTTP Jobs
HTTP jobs allow us to make HTTP requests to a known server, triggering functionality on the remote system. For these, we also provide a sentinel value – this time “HTTP” – and configuration about the HTTP call to make:
JobConfiguration jobConfig = JobConfiguration.newBuilder("MyHttpJob", 3)
    .cron("0/5 * * * * ?")
    .setProperty(HttpJobProperties.URI_KEY, "https://example.com/job")
    .setProperty(HttpJobProperties.METHOD_KEY, "POST")
    .setProperty(HttpJobProperties.DATA_KEY, "source=Baeldung")
    .build();

new ScheduleJobBootstrap(registryCenter, "HTTP", jobConfig)
    .schedule();
This will cause ElasticJob to make an HTTP POST call to this URL every time the job is triggered. This will contain the provided data as the HTTP request body, and will also provide a JSON version of our ShardingContext in the HTTP header ShardingContext:
POST /job HTTP/1.1
Content-Type: application/x-www-form-urlencoded
Content-Length: 15
Host: example.com
ShardingContext: {"jobName":"MyHttpJob","taskId":"MyHttpJob@-@0,1,2@-@READY@[email protected]@-@8253","shardingTotalCount":3,"jobParameter":"Hello","shardingItem":1,"shardingParameter":"b"}
source=Baeldung
Whatever server is then handling this request can execute as needed based on this information.
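As a sketch of that receiving side, the JDK's built-in com.sun.net.httpserver can stand in for a real service. The JobEndpoint class and its response text are hypothetical; the ShardingContext header name is the one ElasticJob actually sends:

```java
import com.sun.net.httpserver.HttpServer;
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

public class JobEndpoint {

    // Starts a minimal endpoint that reads the ShardingContext header
    // and the form-encoded body that the HTTP job sends
    public static HttpServer start(int port) throws IOException {
        HttpServer server = HttpServer.create(new InetSocketAddress(port), 0);
        server.createContext("/job", exchange -> {
            String context = exchange.getRequestHeaders().getFirst("ShardingContext");
            String body = new String(exchange.getRequestBody().readAllBytes(),
                StandardCharsets.UTF_8);
            byte[] response = ("got context=" + (context != null) + " body=" + body)
                .getBytes(StandardCharsets.UTF_8);
            exchange.sendResponseHeaders(200, response.length);
            exchange.getResponseBody().write(response);
            exchange.close();
        });
        server.start();
        return server;
    }
}
```

Starting this locally and pointing HttpJobProperties.URI_KEY at the resulting /job URL would let us watch the job's calls arrive, header and body included.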
7. Summary
In this article, we took a very quick look at ElasticJob. There’s a lot more that we can do with this. Next time you need to manage scheduled jobs for your applications, why not give it a try?
As usual, all of the examples from this article are available over on GitHub.