
1. Overview

Kubernetes is an industry-standard platform for orchestrating and managing containerized applications. By default, Kubernetes keeps its master nodes dedicated to control plane components such as the API server, etcd, the controller manager, and the scheduler, to ensure the system’s stability. However, there are times when it’s useful for us to schedule our workloads on master nodes, such as in resource-constrained environments or testing setups.

In this tutorial, we’ll walk through the steps for enabling pod scheduling on Kubernetes master nodes. This approach is especially useful when maximizing our resources is essential, though it’s important to consider the potential impact on system stability.

2. Why Pod Scheduling on Master Nodes Is Disabled by Default

In Kubernetes, the master node acts as the cluster’s brain, coordinating resources and workloads. By design, it performs critical tasks such as managing the cluster state, scheduling workloads on worker nodes, and handling other essential operations. If we allow our applications to run on the master node, it can introduce risks, such as:

  • Resource Contention: Applications running on master nodes can compete with control plane components for CPU, memory, and network resources, potentially affecting the cluster’s responsiveness and stability
  • Security Concerns: Isolating workloads from the master node can add a layer of security, reducing the likelihood of vulnerabilities affecting core cluster components
  • Performance Degradation: High resource usage by our applications can affect the control plane processes, leading to performance degradation across the entire cluster. This can cause delays in scheduling, resource allocation, and overall cluster responsiveness

While these potential issues exist, there are scenarios where scheduling pods on the master nodes can make sense. For example, in testing environments, smaller clusters, or for certain system-level tasks, allowing pods on our master nodes can prove beneficial.

3. How Kubernetes Controls Pod Scheduling with Taints

Kubernetes manages pod scheduling on nodes using a concept called taints and tolerations. Taints prevent pods from being scheduled on specific nodes unless they have a corresponding toleration. By default, Kubernetes marks master nodes with a taint that keeps regular pods from being scheduled on them unless we explicitly override this setting.

In particular, Kubernetes applies the following taint to master nodes:

node-role.kubernetes.io/master:NoSchedule

On clusters created with kubeadm 1.24 or newer, this taint is replaced by node-role.kubernetes.io/control-plane:NoSchedule, so we may see that key instead. Either way, the NoSchedule effect prevents new pods from being placed on the master node unless they explicitly tolerate the taint. To enable pod scheduling, we can modify or remove this taint.
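Instead of removing the taint cluster-wide, an individual pod can opt in by declaring a matching toleration. As a minimal sketch (the pod name and image are placeholders, and the taint key may differ on newer clusters):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: tolerant-pod
spec:
  # Allow this pod onto nodes carrying the master taint;
  # on Kubernetes 1.24+ the key is node-role.kubernetes.io/control-plane
  tolerations:
  - key: "node-role.kubernetes.io/master"
    operator: "Exists"
    effect: "NoSchedule"
  containers:
  - name: nginx
    image: nginx
```

Note that a toleration only permits scheduling on the tainted node; it doesn’t force the pod there.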

4. Steps to Enable Pod Scheduling on the Master Node

Let’s go through the steps required to enable pod scheduling on a Kubernetes master node by removing its default taint. This approach allows us to schedule workloads on the master node while preserving control over when and where this scheduling occurs.

4.1. Verify the Taint on the Master Node

To begin, let’s identify the current taint applied to our master node. Using the following command, we can inspect the taints assigned to any node, including the master:

kubectl describe node <master-node-name>

We replace <master-node-name> with the actual name of our master node. In the output, we look for the Taints section, where we should see the following entry:

Taints:             node-role.kubernetes.io/master:NoSchedule

Alternatively, we can use the grep tool to extract only the Taints section from the output of the kubectl describe command. This approach helps us focus specifically on the taints applied to the node. Here’s how we can do it:

kubectl describe node <master-node-name> | grep Taints

The presence of NoSchedule here confirms that the master node is currently set to reject regular pods.

4.2. Remove the Taint to Allow Scheduling

Next, let’s remove this taint to allow pods to be scheduled on the master node. To remove the taint, we’ll run the following command:

kubectl taint nodes <master-node-name> node-role.kubernetes.io/master:NoSchedule-

The trailing hyphen (-) tells kubectl to remove this taint from the node. If our cluster uses the control-plane key instead, we substitute node-role.kubernetes.io/control-plane:NoSchedule- accordingly. After running the command, we can verify that the taint has been successfully removed by re-running the kubectl describe node command.

Removing the taint signals to Kubernetes that this master node is now available for scheduling our workloads. This change affects all future scheduling decisions, meaning any pod without specific node constraints can be assigned to this node.

4.3. Test Pod Scheduling on the Master Node

To confirm that our master node is now available to schedule our workloads, we can create a test pod. Let’s use a simple pod manifest for testing purposes:

apiVersion: v1
kind: Pod
metadata:
  name: test-pod
spec:
  containers:
  - name: nginx
    image: nginx

Let’s save this manifest in a file named test-pod.yaml, and apply it using the following command:

kubectl apply -f test-pod.yaml

Once the pod is created, we can verify that it’s running on the master node with the following command:

kubectl describe pod test-pod

In the output, we should see the Node field showing the master node as the pod’s location, indicating that we successfully scheduled it.

4.4. Reapplying the Taint

If we no longer wish to schedule workloads on the master node, or if the node’s resources become constrained, we can reapply the original taint to restrict scheduling. To do this, let’s use the command below:

kubectl taint nodes <master-node-name> node-role.kubernetes.io/master:NoSchedule

By reapplying the NoSchedule taint, we effectively revert the master node to its default state, limiting it to system-related processes and control plane functions only.

5. Best Practices for Scheduling Pods on Master Nodes

While enabling pod scheduling on the master node is possible, it’s crucial to manage this configuration carefully, especially in production environments. Here are some best practices to consider:

  • Monitor Resource Usage: Monitoring CPU, memory, and network utilization of the master node helps ensure that our workloads don’t impact control plane components
  • Limit the Number of Pods: If running workloads on the master node is necessary, we should carefully manage the number of scheduled pods to prevent resource contention and ensure the control plane components continue functioning reliably
  • Use Node Selectors and Affinity Rules: Kubernetes provides node selectors and affinity rules that allow us to direct specific workloads to designated nodes, including the master node, when necessary. These configurations give us more control over workload placement and help ensure we use resources efficiently
  • Isolate Workloads Based on Priority: To better manage workload placement, we can use Kubernetes PriorityClasses to prioritize system workloads and applications. This ensures that critical workloads always have access to resources, even during periods of high demand
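To illustrate the node selector approach, here’s a sketch of a pod that is pinned to master nodes via the node-role label and also tolerates the master taint, since a selector alone isn’t enough on a tainted node (the pod name and image are placeholders, and on newer clusters both keys are node-role.kubernetes.io/control-plane):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: master-pinned-pod
spec:
  # Pin the pod to nodes carrying the master role label...
  nodeSelector:
    node-role.kubernetes.io/master: ""
  # ...and tolerate the taint so the scheduler accepts the placement
  tolerations:
  - key: "node-role.kubernetes.io/master"
    operator: "Exists"
    effect: "NoSchedule"
  containers:
  - name: nginx
    image: nginx
```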

By following these best practices, we can mitigate some of the potential risks and keep the cluster stable while still utilizing master node resources.
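The priority-based isolation mentioned above can be sketched as a PriorityClass definition, where the name and value are purely illustrative:

```yaml
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: high-priority
value: 1000000          # higher values are scheduled first and preempt lower ones
globalDefault: false
description: "Priority class for critical workloads"
```

Pods then reference it by setting spec.priorityClassName to high-priority in their manifests.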

6. Conclusion

Enabling pod scheduling on Kubernetes master nodes can be useful in specific cases, such as smaller or development clusters where optimizing resource usage is essential. However, in production environments, this approach requires caution, as it can significantly affect the cluster’s performance and stability.

In this tutorial, we explored steps to enable pod scheduling by removing the default taint on the master node, along with guidance on when and how to use this setup effectively. By understanding these configurations and their impact, we can make informed decisions on the best way to optimize our cluster’s resources.