Set the Number of Spark Executors
Last updated: February 22, 2026
1. Introduction
Apache Spark gives us powerful control over how an application runs on a cluster. One of the most important aspects of performance and resource utilization is the number of executors a job uses. Executors are the workers that run tasks assigned by the Spark driver. Configuring them correctly helps us improve performance and make good use of cluster resources.
In this tutorial, we explain what executors are, why their configuration matters, and how to set them in both static and dynamic allocation modes.
2. Understanding Spark Executors
An executor is a JVM process launched on a worker node. Executors have three main responsibilities:
- Run tasks assigned by the Spark driver
- Store data in memory or on disk (for caching and shuffle operations)
- Report execution status and metrics back to the driver
Each executor runs multiple tasks in parallel. The number of concurrent tasks depends on the number of CPU cores assigned to that executor. More executors usually increase parallelism, but only when sufficient data and cluster resources are available.
A Spark application always has one driver, but it can have many executors. The cluster manager (YARN, Kubernetes, or standalone) decides where executors run, while we decide how many executors Spark requests and how large they are.
To manage resources efficiently, Spark provides two main executor allocation strategies:
- Static Allocation: Spark has a fixed number of executors.
- Dynamic Allocation: Spark adjusts the number of executors at runtime based on workload.
3. Static Allocation
Static executor allocation means that the number of executors is fixed for the entire lifetime of the Spark application. Spark requests exactly that number from the cluster manager at startup and keeps those executors alive until the application finishes.
Executors do not scale up or down dynamically. This approach works well for workloads with predictable resource requirements, such as daily batch jobs.
There are three main ways to configure static executors:
3.1. Using spark-submit
The standard way to launch a Spark application is with the spark-submit command-line tool. It is the primary entry point for running applications on a cluster and allows us to define resource requirements at submission time.
When submitting a Spark application, we can use the --num-executors option to set the number of executors. We usually combine this with --executor-cores and --executor-memory to define the resources per executor:
spark-submit \
--class com.example.MyApp \
--master yarn \
--num-executors 8 \
--executor-cores 4 \
--executor-memory 8G \
my-spark-app.jar
In this example, Spark launches 8 executors, each with 4 cores and 8 GB of memory. These executors remain allocated for the full runtime of the application.
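Once the application is running, we can sanity-check how many executors were actually granted. The following is only a minimal sketch: it assumes a recent Spark version where the status tracker exposes getExecutorInfos(), and the returned list may also include the driver depending on the cluster manager:
import org.apache.spark.SparkExecutorInfo;
import org.apache.spark.sql.SparkSession;

SparkSession spark = SparkSession.builder()
    .appName("MyApp")
    .getOrCreate();

// Executors currently registered with the driver (may include the driver itself)
SparkExecutorInfo[] executors = spark.sparkContext().statusTracker().getExecutorInfos();
System.out.println("Registered executors: " + executors.length);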
3.2. Using Spark Configuration Files
We can also define static executors in spark-defaults.conf or other configuration files. This is useful when we want to externalize resource settings:
spark.executor.instances 8
spark.executor.cores 4
spark.executor.memory 8G
When dynamic allocation is disabled, Spark will use these settings whenever a job is submitted. This approach is ideal for standardizing resource allocation across multiple applications.
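Because these values come from the environment rather than from the code, it can be useful for a job to log the configuration it actually resolved. Here's a small sketch; the key lookups assume the properties were set in spark-defaults.conf or at submit time, and the second argument is just a fallback string we chose for illustration:
import org.apache.spark.sql.SparkSession;

SparkSession spark = SparkSession.builder()
    .appName("ConfigFileExample")
    .getOrCreate();

// Read back the values resolved from spark-defaults.conf (or any override)
String instances = spark.conf().get("spark.executor.instances", "not set");
String cores = spark.conf().get("spark.executor.cores", "not set");
String memory = spark.conf().get("spark.executor.memory", "not set");

System.out.println("Executors: " + instances + ", cores: " + cores + ", memory: " + memory);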
3.3. Programmatically in the Application
We can also configure static executors programmatically by setting Spark properties in code, as long as we do so before the SparkSession (and its underlying SparkContext) is created:
import org.apache.spark.sql.SparkSession;

// Executor settings must be in place before the underlying SparkContext is created
SparkSession spark = SparkSession.builder()
    .appName("StaticExecutorExample")
    .config("spark.executor.instances", "8")
    .config("spark.executor.cores", "4")
    .config("spark.executor.memory", "8G")
    .getOrCreate();
This approach allows applications to decide executor configuration at runtime, which can be useful in dynamic deployment pipelines, although it’s less common than the other two methods.
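An equivalent way to express the same settings is to build a SparkConf first and pass it to the builder, which can make it easier to assemble the configuration from application arguments. This is a sketch that simply reuses the same values and application name as above:
import org.apache.spark.SparkConf;
import org.apache.spark.sql.SparkSession;

// Collect all executor-related settings before the session exists
SparkConf conf = new SparkConf()
    .setAppName("StaticExecutorExample")
    .set("spark.executor.instances", "8")
    .set("spark.executor.cores", "4")
    .set("spark.executor.memory", "8G");

SparkSession spark = SparkSession.builder()
    .config(conf)
    .getOrCreate();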
Static allocation works best when the workload size is known and stable, especially on a dedicated or lightly shared cluster, and when predictable performance matters more than flexibility.
4. Dynamic Allocation
Dynamic executor allocation allows Spark to adjust the number of executors at runtime based on workload. Executors are added when there are pending tasks and removed when they remain idle for a certain period. This approach improves cluster utilization, especially in shared environments where multiple applications run concurrently.
Unlike static allocation, we don’t set a fixed number of executors. Instead, we define minimum and maximum bounds and let Spark scale within that range.
We can configure dynamic allocation using the same methods as static allocation, such as spark-submit, configuration files, or programmatic settings. To enable dynamic allocation through spark-submit, we can use the following configuration:
spark-submit \
--conf spark.dynamicAllocation.enabled=true \
--conf spark.dynamicAllocation.minExecutors=2 \
--conf spark.dynamicAllocation.maxExecutors=20 \
--conf spark.dynamicAllocation.initialExecutors=4 \
--conf spark.shuffle.service.enabled=true \
my-app.jar
In this example, we configure the lower and upper bounds for dynamic allocation:
- minExecutors: Defines the minimum number of executors, which ensures a baseline level of parallelism.
- maxExecutors: Sets the maximum number of executors, preventing Spark from requesting excessive cluster resources.
- initialExecutors: Controls how many executors Spark allocates when the application starts.
With these settings, Spark starts with 4 executors and can scale between 2 and 20 as the workload changes. When executors remain idle for a configured timeout, Spark removes them automatically, allowing the cluster to reclaim unused resources. Removing executors safely requires their shuffle data to outlive them, which is why the example also enables the external shuffle service; on newer Spark versions, shuffle tracking is an alternative.
Dynamic allocation is ideal for jobs with unpredictable or fluctuating resource requirements. It ensures efficient use of cluster resources without manually tuning executor counts for every job.
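As mentioned above, the same bounds can also be set programmatically. The following is a minimal sketch; it assumes a Spark 3.x cluster and uses shuffle tracking (spark.dynamicAllocation.shuffleTracking.enabled) instead of the external shuffle service:
import org.apache.spark.sql.SparkSession;

SparkSession spark = SparkSession.builder()
    .appName("DynamicAllocationExample")
    .config("spark.dynamicAllocation.enabled", "true")
    .config("spark.dynamicAllocation.minExecutors", "2")
    .config("spark.dynamicAllocation.maxExecutors", "20")
    .config("spark.dynamicAllocation.initialExecutors", "4")
    // Shuffle tracking lets Spark release executors without an external shuffle service
    .config("spark.dynamicAllocation.shuffleTracking.enabled", "true")
    .getOrCreate();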
5. Executors and Parallelism
Executors determine how Spark runs tasks in parallel. Each executor has a fixed number of CPU cores, and by default Spark schedules one task per core. This means the total parallelism of a job is roughly:
total_parallel_tasks = number_of_executors × executor_cores
For example, 8 executors with 4 cores each can run 32 tasks simultaneously.
Parallelism also depends on partitions. Each partition becomes one task. If we have fewer partitions than total cores, some cores remain idle. If we have many partitions, Spark processes tasks in batches across available cores.
Balancing the number of executors and cores per executor is important:
- More executors with fewer cores: better load distribution, especially for many small tasks.
- Fewer executors with more cores: can be efficient for CPU-intensive tasks but may cause contention.
In addition, Spark has settings such as spark.default.parallelism that control how many tasks it creates. These settings interact with executors but do not directly allocate them.
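To see how executors, cores, and partitions interact in practice, we can compare the scheduler's default parallelism with the partition count of our data. This is only an illustrative sketch: the app name and the input path /data/events are hypothetical, and defaultParallelism is roughly the total cores across executors on coarse-grained cluster managers:
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

SparkSession spark = SparkSession.builder()
    .appName("ParallelismExample")
    .getOrCreate();

// Roughly number_of_executors x executor_cores, e.g. 8 x 4 = 32
int totalCores = spark.sparkContext().defaultParallelism();

Dataset<Row> df = spark.read().parquet("/data/events"); // hypothetical input path

// Match the partition count to the available cores so no core sits idle
Dataset<Row> repartitioned = df.repartition(totalCores);
System.out.println("Cores: " + totalCores + ", partitions: " + repartitioned.rdd().getNumPartitions());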
6. Conclusion
In this article, we explored how to configure executors for a Spark application. Configuring executors plays a crucial role in Spark performance and resource efficiency.
Static allocation offers predictable execution, while dynamic allocation adapts to changing workloads. Choosing the right approach helps us balance parallelism and cluster utilization effectively.
As always, the code is available over on GitHub.