1. Introduction

Apache Spark gives us powerful control over how an application runs on a cluster. One of the most important aspects of performance and resource utilization is the number of executors a job uses. Executors are the workers that run tasks assigned by the Spark driver. Configuring them correctly helps us improve performance and make good use of cluster resources.

In this tutorial, we'll explain what executors are, why their configuration matters, and how to configure them using both static and dynamic allocation.

2. Understanding Spark Executors

An executor is a JVM process launched on a worker node. Executors have three main responsibilities:

  • Run tasks assigned by the Spark driver
  • Store data in memory or on disk (for caching and shuffle operations)
  • Report execution status and metrics back to the driver

Each executor runs multiple tasks in parallel. The number of concurrent tasks depends on the number of CPU cores assigned to that executor. More executors usually increase parallelism, but only when sufficient data and cluster resources are available.

A Spark application always has one driver, but it can have many executors. The cluster manager (YARN, Kubernetes, or standalone) decides where executors run, while we decide how many executors Spark requests and how large they are.
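
To check which executors an application actually received, we can query the Spark status tracker at runtime. This is a minimal sketch, assuming an already-created SparkSession named spark:

import org.apache.spark.SparkExecutorInfo;

// Executors currently registered with the application
SparkExecutorInfo[] executors = spark.sparkContext().statusTracker().getExecutorInfos();
for (SparkExecutorInfo info : executors) {
    System.out.println(info.host() + " - running tasks: " + info.numRunningTasks());
}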

To manage resources efficiently, Spark provides two main executor allocation strategies:

  • Static Allocation: Spark uses a fixed number of executors for the entire application.
  • Dynamic Allocation: Spark adjusts the number of executors at runtime based on workload.

3. Static Allocation

Static executor allocation means the number of executors is fixed for the entire lifetime of the Spark application: Spark requests exactly that number from the cluster manager at startup and keeps those executors alive until the application finishes.

Executors do not scale up or down dynamically. This approach works well for workloads with predictable resource requirements, such as daily batch jobs.

There are three main ways to configure static executors:

3.1. Using spark-submit

The standard way to launch a Spark application is with the spark-submit command-line tool. It is the primary entry point for running applications on a cluster and allows us to define resource requirements at submission time.

When submitting a Spark application, we can use the --num-executors option to set the number of executors. We usually combine this with --executor-cores and --executor-memory to define the resources per executor:

spark-submit \
  --class com.example.MyApp \
  --master yarn \
  --num-executors 8 \
  --executor-cores 4 \
  --executor-memory 8G \
  my-spark-app.jar

In this example, Spark launches 8 executors, each with 4 cores and 8 GB of memory. These executors remain allocated for the full runtime of the application.

3.2. Using Spark Configuration Files

We can also define static executors in spark-defaults.conf or other configuration files. This is useful when we want to externalize resource settings:

spark.executor.instances 8
spark.executor.cores 4
spark.executor.memory 8G

When dynamic allocation is disabled, Spark uses these settings whenever a job is submitted. This approach is ideal for standardizing resource allocation across multiple applications. Note that properties set programmatically or passed to spark-submit take precedence over spark-defaults.conf.

3.3. Programmatically in the Application

We can configure static executors programmatically by setting Spark properties in code before performing any actions:

import org.apache.spark.sql.SparkSession;

// Executor settings must be applied before the SparkSession is created
SparkSession spark = SparkSession.builder()
  .appName("StaticExecutorExample")
  .config("spark.executor.instances", "8")
  .config("spark.executor.cores", "4")
  .config("spark.executor.memory", "8G")
  .getOrCreate();

This approach allows applications to decide executor configuration at runtime, which can be useful in dynamic deployment pipelines, although it’s less common than the other two methods.

Static allocation works best when the workload size is known and stable, especially on a dedicated or lightly shared cluster, and when predictable performance matters more than flexibility.

4. Dynamic Allocation

Dynamic executor allocation allows Spark to adjust the number of executors at runtime based on workload. Executors are added when there are pending tasks and removed when they remain idle for a certain period. This approach improves cluster utilization, especially in shared environments where multiple applications run concurrently.

Unlike static allocation, we don’t set a fixed number of executors. Instead, we define minimum and maximum bounds and let Spark scale within that range.

We can configure dynamic allocation using the same methods as static allocation, such as spark-submit, configuration files, or programmatic settings. To enable dynamic allocation through spark-submit, we can use the following configuration:

spark-submit \
  --conf spark.dynamicAllocation.enabled=true \
  --conf spark.dynamicAllocation.minExecutors=2 \
  --conf spark.dynamicAllocation.maxExecutors=20 \
  --conf spark.dynamicAllocation.initialExecutors=4 \
  --conf spark.shuffle.service.enabled=true \
  my-app.jar

In this example, we configure the lower and upper bounds for dynamic allocation:

  • minExecutors: Defines the minimum number of executors, which ensures a baseline level of parallelism.
  • maxExecutors: Sets the maximum number of executors, preventing Spark from requesting excessive cluster resources.
  • initialExecutors: Controls how many executors Spark allocates when the application starts.
  • spark.shuffle.service.enabled: Enables the external shuffle service, which keeps shuffle data available after an executor is removed; dynamic allocation requires it (or shuffle tracking in newer Spark versions).

With these settings, Spark starts with 4 executors and can scale between 2 and 20 as the workload changes. When executors remain idle longer than the configured timeout (spark.dynamicAllocation.executorIdleTimeout, 60 seconds by default), Spark removes them automatically, allowing the cluster to reclaim unused resources.
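
We can also set the same bounds programmatically when creating the session ourselves. Here's a minimal sketch mirroring the spark-submit example above; the application name is just a placeholder:

import org.apache.spark.sql.SparkSession;

SparkSession spark = SparkSession.builder()
  .appName("DynamicAllocationExample")
  // Scale executors between 2 and 20, starting with 4
  .config("spark.dynamicAllocation.enabled", "true")
  .config("spark.dynamicAllocation.minExecutors", "2")
  .config("spark.dynamicAllocation.maxExecutors", "20")
  .config("spark.dynamicAllocation.initialExecutors", "4")
  // Keeps shuffle data available after an executor is removed
  .config("spark.shuffle.service.enabled", "true")
  .getOrCreate();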

Dynamic allocation is ideal for jobs with unpredictable or fluctuating resource requirements. It ensures efficient use of cluster resources without manually tuning executor counts for every job.

5. Executors and Parallelism

Executors determine how Spark runs tasks in parallel. Each executor has a fixed number of CPU cores, and Spark schedules one task per core. This means the total parallelism of a job is roughly:

total_parallel_tasks = number_of_executors × executor_cores

For example, 8 executors with 4 cores each can run 32 tasks simultaneously.

Parallelism also depends on partitions. Each partition becomes one task. If we have fewer partitions than total cores, some cores remain idle. If we have many partitions, Spark processes tasks in batches across available cores.

Balancing the number of executors and cores per executor is important:

  • More executors with fewer cores: better load distribution, especially for many small tasks.
  • Fewer executors with more cores: can be efficient for CPU-intensive tasks but may cause contention.

In addition, Spark has settings like spark.default.parallelism, which control how many tasks it creates. Those settings interact with executors but do not directly allocate executors.
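
To make this concrete, here's a small sketch that aligns the partition count with the total number of cores, assuming the static configuration from earlier (8 executors with 4 cores each) and a hypothetical Dataset named df:

import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;

// total_parallel_tasks = number_of_executors × executor_cores
int totalCores = 8 * 4; // 32 concurrent task slots

// df is a hypothetical Dataset<Row>; repartition so every core has work
Dataset<Row> balanced = df.repartition(totalCores);
System.out.println(balanced.rdd().getNumPartitions()); // prints 32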

6. Conclusion

In this article, we explored how to configure executors for a Spark application, a setting that plays a crucial role in Spark performance and resource efficiency.

Static allocation offers predictable execution, while dynamic allocation adapts to changing workloads. Choosing the right approach helps us balance parallelism and cluster utilization effectively.

As always, the code is available over on GitHub.
