
1. Overview

The Kubernetes Dashboard is like a control center for managing and troubleshooting Kubernetes clusters. It’s a web-based interface that gives us a bird’s-eye view of everything happening in our cluster.

However, sometimes we might run into a frustrating roadblock—the “no endpoints available for service kubernetes-dashboard” error. This error locks us out of the Dashboard, hindering our ability to manage our cluster.

This error usually means something’s wrong with the Kubernetes service that’s supposed to connect us to the Dashboard. It could be a simple misconfiguration, or maybe the pods behind the scenes aren’t working as they should.

Whatever the cause, it’s essential to fix it so we can get back to managing our cluster.

In this tutorial, we’ll walk through the steps to troubleshoot and resolve the error. We’ll cover common causes and provide practical solutions.

As a result, we can regain control of our Kubernetes Dashboard and keep our cluster running smoothly.

2. Understanding the Error

Before troubleshooting, let’s understand the core of the “no endpoints available for service kubernetes-dashboard” error.

In Kubernetes, a Service acts as an internal load balancer. It provides a stable network address for a group of Pods. However, a Service relies on Endpoints to actually know which Pods to send traffic to.

If a Service has no Endpoints or is unreachable due to network issues or pod failures, we get the “no endpoints available for service kubernetes-dashboard” error.
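To confirm the symptom, we can query the Endpoints object behind the Service directly (the exact object name may vary with the installation method; the sample output below is illustrative):

$ kubectl get endpoints kubernetes-dashboard -n kubernetes-dashboard
NAME                   ENDPOINTS   AGE
kubernetes-dashboard   <none>      151m

An ENDPOINTS column showing <none> confirms that no healthy pods currently match the Service's selector.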

Now, what causes this error?

There are several common culprits. First, the pods the Service is supposed to connect to might not be running correctly. They could be in a Pending state, stuck in CrashLoopBackOff, or simply not scheduled yet.

Additionally, network connectivity problems within the cluster can prevent the Service from discovering or reaching its Endpoints.

This could be due to misconfigured network policies, Container Network Interface (CNI) plugin issues, or even general network outages.

Finally, misconfigurations in the Service definition, such as incorrect selector labels or typos in the Service name, can also lead to this error.

3. Making Preliminary Checks

When faced with the “no endpoints available for service kubernetes-dashboard” error, it’s essential to start with some basic checks to narrow down the potential causes.

These initial investigations can help us focus our troubleshooting efforts and identify the root of the problem more efficiently.

3.1. Check the Status of the Dashboard Pods

To begin our investigation, let’s check the health of the Kubernetes Dashboard pods responsible for the Dashboard’s web interface and backend services.

We can use the following kubectl command to get a list of all the pods in the kubernetes-dashboard namespace:

$ kubectl get pods -n kubernetes-dashboard
NAME                                                    READY   STATUS    RESTARTS   AGE
kubernetes-dashboard-api-6bffffcc78-v42vm               1/1     Running   0          151m
kubernetes-dashboard-auth-5d8486d4f4-xqkwn              1/1     Running   0          151m
kubernetes-dashboard-kong-7696bb8c88-q6swd              1/1     Running   0          151m
kubernetes-dashboard-metrics-scraper-5485b64c47-4x5nz   1/1     Running   0          151m
kubernetes-dashboard-web-84f8d6fff4-6k8lh               1/1     Running   0          151m

The above command provides a tabular output with details about each pod, including its name, readiness status, current state, and any recent restarts.

The STATUS column is particularly crucial for diagnosing the “no endpoints available for service kubernetes-dashboard” error.

A Running status indicates a healthy pod with all containers operating normally. However, if the Dashboard pods show a Pending status, it suggests they are waiting to be scheduled, possibly due to resource limitations or node issues.

Similarly, a CrashLoopBackOff status signals that the pod’s containers are repeatedly crashing and restarting. This often points to configuration errors or application-level problems.

Therefore, by carefully examining the status of the Kubernetes Dashboard pods, we can gain valuable clues about the potential cause of the error.

If the pods aren’t in the Running state, further investigation is necessary to address the underlying issues preventing them from starting or remaining healthy.
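To observe the pods transition between states in real time, for example to catch repeated restarts as they happen, we can add the -w (watch) flag:

$ kubectl get pods -n kubernetes-dashboard -w

Pressing Ctrl+C stops the watch once we’ve seen enough.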

3.2. Inspect Service Configuration

Now that we’ve checked the pod statuses, let’s turn our attention to the Kubernetes Dashboard Service itself. The Service configuration acts as a bridge between the Dashboard and its underlying pods.

Moreover, any misconfigurations here can lead to the “no endpoints available for service kubernetes-dashboard” error.

To get a detailed look at the Service configuration, we can use the kubectl describe command:

$ kubectl describe service kubernetes-dashboard-api -n kubernetes-dashboard
Name:              kubernetes-dashboard-api
Namespace:         kubernetes-dashboard
Labels:            app.kubernetes.io/component=api
                   app.kubernetes.io/instance=kubernetes-dashboard
                   app.kubernetes.io/managed-by=Helm
                   app.kubernetes.io/name=kubernetes-dashboard-api
                   app.kubernetes.io/part-of=kubernetes-dashboard
                   app.kubernetes.io/version=1.7.0
                   helm.sh/chart=kubernetes-dashboard-7.5.0
Annotations:       meta.helm.sh/release-name: kubernetes-dashboard
                   meta.helm.sh/release-namespace: kubernetes-dashboard
Selector:          app.kubernetes.io/instance=kubernetes-dashboard,app.kubernetes.io/name=kubernetes-dashboard-api,app.kubernetes.io/part-of=kubernetes-dashboard
Type:              ClusterIP
IP Family Policy:  SingleStack
IP Families:       IPv4
IP:                10.107.132.221
IPs:               10.107.132.221
Port:              api  8000/TCP
TargetPort:        8000/TCP
Endpoints:         10.244.0.81:8000
Session Affinity:  None
Events:            <none>
...

This command output provides a wealth of information about the Service, including its selectors, ports, and endpoint associations. However, let’s focus on a couple of common misconfigurations that can cause the error we’re troubleshooting.

First, let’s double-check the selector labels in the Service configuration. These labels are like tags the Service uses to find the right pods.

If these labels don’t match the labels on the actual Kubernetes Dashboard pods, the Service won’t be able to connect to them.

Second, we must ensure the Kubernetes Dashboard pods have the correct labels. If the pods are missing labels or have mismatched ones, the Service won’t be able to find them.

Hence, by inspecting the Service configuration and comparing it to the pod labels, we can spot and fix any misconfigurations causing the problem.
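A quick way to perform this comparison is to print the Service’s selector and the pod labels side by side. For instance, for the kubernetes-dashboard-api Service from the output above:

$ kubectl get service kubernetes-dashboard-api -n kubernetes-dashboard -o jsonpath='{.spec.selector}'
$ kubectl get pods -n kubernetes-dashboard --show-labels

Every key-value pair in the selector must appear among the labels of at least one Running pod; otherwise, the Service ends up with no Endpoints.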

4. Troubleshooting Steps

If the preliminary checks haven’t uncovered the issue, it’s time to roll up our sleeves and go deeper into troubleshooting.

4.1. Ensure Pods Are Scheduled Correctly

When the Kubernetes Dashboard pods aren’t running, scheduling issues might prevent them from starting. To investigate further, we can use the kubectl describe command to get detailed information about a specific pod:

$ kubectl describe pod <pod_name> -n kubernetes-dashboard

Scheduling failures often stem from node taints, which act as restrictions on which pods can run on a node. If a node has a taint that the pod doesn’t tolerate, Kubernetes won’t schedule it there.

Additionally, pods can fail to start when the nodes don’t have enough CPU or memory available to satisfy the pods’ resource requests.

Lastly, conflicting affinity or anti-affinity rules, which control pod placement based on labels, can also lead to scheduling issues.

Therefore, by examining the kubectl describe output, we can pinpoint the cause of scheduling failures and take corrective action. Our focus should be on the Taints, Events, and Affinity sections.
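For example, to list the taints on every node at a glance, we can use a custom-columns query:

$ kubectl get nodes -o custom-columns=NAME:.metadata.name,TAINTS:.spec.taints

Any taint listed here must have a matching toleration in the Dashboard pod spec for the pod to be scheduled on that node.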

4.2. Check for Network Issues

Network problems within the cluster can disrupt communication between pods and services. This can lead to the “no endpoints available for service kubernetes-dashboard” error.

Let’s begin by verifying the status of our nodes:

$ kubectl get nodes

Next, let’s inspect the core Kubernetes services in the kubernetes-dashboard namespace to confirm their accessibility:

$ kubectl get svc -n kubernetes-dashboard

Finally, let’s examine the status and logs of the CNI plugin responsible for pod networking. Most CNI plugins run their own pods in the kube-system namespace.

We can list these pods and check their status:

$ kubectl get pods -n kube-system -l k8s-app=<cni_plugin_name>

However, if any issues surface, reviewing the CNI pod logs may provide more details:

$ kubectl logs <cni_pod_name> -n kube-system
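Additionally, we can verify in-cluster DNS resolution of the Dashboard Service by launching a short-lived test pod, adjusting the Service name if needed (the busybox image is just one convenient choice):

$ kubectl run dns-test --rm -it --restart=Never --image=busybox:1.36 -- nslookup kubernetes-dashboard.kubernetes-dashboard.svc.cluster.local

If the lookup fails, the problem likely lies with cluster DNS (such as CoreDNS) rather than the Dashboard itself.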

Hence, we can identify and resolve network-related issues affecting the Kubernetes Dashboard’s functionality by checking these components.

4.3. Verify Pod Health and Logs

When troubleshooting the “no endpoints available for service kubernetes-dashboard” error, it’s crucial to examine the health and logs of the Kubernetes Dashboard pod itself.

Even if the pod runs, internal issues might prevent it from establishing endpoints.

To gain insights into the pod’s operations and potential problems, we can access its logs using the following command:

$ kubectl logs <dashboard-pod-name> -n kubernetes-dashboard

Common log patterns often point to specific issues. For instance, logs indicating a failure to start could be due to configuration errors or missing dependencies.

Errors related to connecting to the Kubernetes API server might suggest network issues or incorrect configurations.
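In addition, if the pod has already restarted, the current logs may not contain the original failure. The --previous flag retrieves the logs of the last terminated container instead:

$ kubectl logs <dashboard-pod-name> -n kubernetes-dashboard --previous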

4.4. Reconfigure or Recreate the Dashboard Deployment

We can reconfigure or recreate the Kubernetes Dashboard deployment to resolve issues. Let’s start by removing all relevant deployments in the kubernetes-dashboard namespace:

$ kubectl delete deployment <deployment_name> ... -n kubernetes-dashboard
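Alternatively, if the Dashboard was originally installed with Helm, uninstalling the release removes all of its resources in a single step:

$ helm uninstall kubernetes-dashboard -n kubernetes-dashboard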

Once we’ve deleted the old deployments, we have two redeployment options: Helm or a YAML manifest.

To redeploy using Helm, first, let’s add the Kubernetes Dashboard repository:

$ helm repo add kubernetes-dashboard https://kubernetes.github.io/dashboard/

Next, let’s deploy a Helm release using the kubernetes-dashboard chart:

$ helm upgrade --install kubernetes-dashboard kubernetes-dashboard/kubernetes-dashboard --create-namespace --namespace kubernetes-dashboard

Alternatively, for YAML manifest-based deployment, we can use the kubectl apply command:

$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.7.0/aio/deploy/recommended.yaml

After completing the redeployment, it’s essential to confirm that the pods are running smoothly and that the Service configuration is correct.
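To verify, we can re-run the earlier checks and confirm that every Service now lists at least one endpoint:

$ kubectl get pods -n kubernetes-dashboard
$ kubectl get endpoints -n kubernetes-dashboard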

This approach enables us to ensure our Kubernetes Dashboard functions properly and resolves common issues effectively.

5. Conclusion

In this article, we’ve explored how to troubleshoot the “no endpoints available for service kubernetes-dashboard” error in Kubernetes.

We covered checking pods and service statuses, diagnosing network issues, and recreating the Dashboard deployment using Helm or YAML manifests.

These troubleshooting steps help ensure that our Kubernetes Dashboard functions properly. This enables us to manage and monitor our cluster efficiently.

In addition, applying these techniques strengthens our ability to maintain a stable Kubernetes environment and quickly resolve common issues that may arise.