Last updated: November 29, 2024
Kubernetes has revolutionized how we deploy and manage containerized applications. It’s a powerful orchestration system that automates the deployment, scaling, and operations of our apps.
To interact with Kubernetes clusters, we rely on kubectl, a versatile command-line tool. kubectl acts as our bridge to the Kubernetes API server, the central brain of the cluster. But what happens when this bridge breaks down?
Typing a simple command like kubectl get nodes and being greeted with the error message “server doesn’t have a resource type ‘nodes’” can be quite frustrating, especially since nodes are the very foundation of any Kubernetes cluster, acting as the worker bees that run our applications.
In this tutorial, we’ll break down this error, explore its common causes, and provide clear solutions.
Before we jump into troubleshooting, it’s essential to understand what this error message actually means. Let’s say we run the following command:
$ kubectl get nodes
And we get the following output:
error: the server doesn’t have a resource type “nodes”
Essentially, this indicates that kubectl can’t find the “nodes” resource type on the Kubernetes API server. But why is this happening? After all, “nodes” are the foundation of any Kubernetes cluster.
Therefore, if kubectl can’t even list the nodes, it usually points to a communication breakdown between our command-line tool and the cluster’s API server. It’s like kubectl is trying to make a phone call to the API server, but the call isn’t going through.
Now, there are several reasons why this communication might be failing. Let’s explore the most common ones in detail, along with how to fix them.
An incorrect kubeconfig file often leads to the “server doesn’t have a resource type ‘nodes’” error. The kubeconfig file tells kubectl how to connect to the cluster. If it’s misconfigured, kubectl can’t communicate with the Kubernetes API server, resulting in errors.
First, let’s check which cluster kubectl is currently targeting:
$ kubectl config current-context
minikube
This command displays the current context kubectl is using. If it doesn’t match the cluster we’re trying to manage, we might be pointing it to the wrong cluster.
We can see all available contexts by running the command:
$ kubectl config get-contexts
CURRENT   NAME         CLUSTER      AUTHINFO   NAMESPACE
*         minikube     minikube     minikube   default
          my-cluster   my-cluster   my-user
The asterisk (*) indicates the current context. If the desired context isn’t selected, we can switch to it using:
$ kubectl config use-context <desired_context>
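For example, using the second context from the listing above:
$ kubectl config use-context my-cluster
Switched to context "my-cluster".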
Sometimes, the kubeconfig file itself is outdated or corrupted. By default, it’s located at ~/.kube/config. We can verify that this file exists:
$ ls -l ~/.kube/config
-rw------- 1 bluecrane bluecrane 836 Nov 23 10:51 /home/bluecrane/.kube/config
If the file is missing or empty, we need to obtain the correct kubeconfig file from our cluster.
For clusters created with kubeadm, we can copy the admin kubeconfig file:
$ sudo cp /etc/kubernetes/admin.conf $HOME/.kube/config
Let’s ensure that we have the right permissions:
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
Now, let’s check the content of our kubeconfig file:
$ kubectl config view
We have to look for three sections in the output of the above command:
- clusters: the API server address and certificate authority data for each cluster
- users: the credentials (certificates, keys, or tokens) used to authenticate
- contexts: the pairings of a cluster, a user, and an optional default namespace
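For reference, the output for a local Minikube cluster looks roughly like this (the server address and file paths are illustrative and will differ per setup):
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/bluecrane/.minikube/ca.crt
    server: https://192.168.49.2:8443
  name: minikube
contexts:
- context:
    cluster: minikube
    user: minikube
  name: minikube
current-context: minikube
kind: Config
preferences: {}
users:
- name: minikube
  user:
    client-certificate: /home/bluecrane/.minikube/profiles/minikube/client.crt
    client-key: /home/bluecrane/.minikube/profiles/minikube/client.key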
Furthermore, if we have multiple kubeconfig files or suspect that kubectl is using the wrong one, we can check the KUBECONFIG environment variable:
$ echo $KUBECONFIG
If it’s set to an unexpected path, we can unset it:
$ unset KUBECONFIG
Alternatively, we can explicitly tell kubectl which configuration file to use:
$ kubectl --kubeconfig=$HOME/.kube/config get nodes
After ensuring our kubeconfig file is correctly configured, we can try running kubectl get nodes again. If everything is in order, we should see a list of nodes in the cluster.
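On a healthy cluster, the output looks something like this (names, ages, and versions will vary per setup):
$ kubectl get nodes
NAME       STATUS   ROLES           AGE   VERSION
minikube   Ready    control-plane   22h   v1.31.0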
Authentication problems can prevent kubectl from accessing our cluster, even with a correct kubeconfig file. If our credentials are missing, expired, or incorrect, we’ll likely encounter this error.
First, let’s check our kubeconfig file (~/.kube/config) and look for the users section, where we can find the path to the credentials:
...
users:
- name: minikube
  user:
    client-certificate: /home/bluecrane/.minikube/profiles/minikube/client.crt
    client-key: /home/bluecrane/.minikube/profiles/minikube/client.key
Let’s check the certificate’s expiration date:
$ openssl x509 -noout -enddate -in /home/bluecrane/.minikube/profiles/minikube/client.crt
notAfter=Nov 23 10:51:08 2027 GMT
If the certificate has expired, we need to renew it or obtain new credentials.
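The renewal procedure depends on how the cluster was provisioned. For a kubeadm-based cluster, for instance, kubeadm can renew the admin certificate, after which we re-copy the kubeconfig (these commands assume a kubeadm setup):
$ sudo kubeadm certs renew admin.conf
$ sudo cp /etc/kubernetes/admin.conf $HOME/.kube/config
With Minikube, simply restarting the cluster via minikube start typically regenerates expired client certificates.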
Additionally, we need to ensure that our user has the necessary permissions to access cluster resources. Kubernetes uses Role-Based Access Control (RBAC) to manage permissions. If our user lacks the appropriate roles or cluster roles, we won’t be able to list nodes.
Let’s verify we have the necessary permissions. Since kubectl get nodes performs a list operation under the hood, we check the list verb:
$ kubectl auth can-i list nodes --as=<username>
If the command returns no, we lack permission to list nodes, and we need to request appropriate permissions from the administrator.
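For illustration, a cluster administrator could grant read-only access to nodes with a ClusterRole and ClusterRoleBinding along these lines (the node-reader and my-user names here are hypothetical):
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: node-reader
rules:
- apiGroups: [""]   # nodes live in the core API group
  resources: ["nodes"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: read-nodes
subjects:
- kind: User
  name: my-user     # the user needing access
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: node-reader
  apiGroup: rbac.authorization.k8s.io
Once the administrator applies this with kubectl apply -f, the can-i check above should return yes.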
Sometimes, the error occurs because our Kubernetes cluster isn’t running properly. Even with correct configuration and authentication, if cluster components are down, kubectl can’t interact with them.
This situation often leads to the error message: “server doesn’t have a resource type ‘nodes’”.
Now, let’s check if our cluster is operational and see how to fix a few common issues.
Assuming we’re using a local cluster like Minikube, we can start by checking its status:
$ minikube status
minikube
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured
If any component shows Stopped or Paused, we should start Minikube:
$ minikube start
For clusters bootstrapped with kubeadm, the control plane components run as static pods on the control plane node, which we can inspect with kubectl as shown below. For installations where the components instead run directly as systemd services, we can check their status:
$ sudo systemctl status kube-apiserver
$ sudo systemctl status kube-controller-manager
$ sudo systemctl status kube-scheduler
If any service isn’t running, we can start it using:
$ sudo systemctl start <service_name>
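For example, to bring the scheduler back up and confirm it stays running:
$ sudo systemctl start kube-scheduler
$ sudo systemctl status kube-scheduler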
Furthermore, for containerized control plane components, we can list the pods in the kube-system namespace:
$ kubectl get pods -n kube-system
NAME                               READY   STATUS    RESTARTS        AGE
coredns-6f6b679f8f-9whsz           1/1     Running   0               22h
etcd-minikube                      1/1     Running   0               22h
kube-apiserver-minikube            1/1     Running   0               22h
kube-controller-manager-minikube   1/1     Running   0               22h
kube-proxy-qwcxn                   1/1     Running   0               22h
kube-scheduler-minikube            1/1     Running   0               22h
storage-provisioner                1/1     Running   7 (4m24s ago)   22h
If any pod is stuck in a CrashLoopBackOff or Error state, we can view its logs to diagnose the failure:
$ kubectl logs -n kube-system <pod_name>
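For instance, to inspect the API server pod from the listing above:
$ kubectl logs -n kube-system kube-apiserver-minikube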
Next, let’s check the status of our nodes:
$ kubectl get nodes
If a node shows NotReady, we can restart the kubelet service on that node:
$ sudo systemctl restart kubelet
After a few moments, we can check the node status again.
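If the node is still NotReady after the restart, describing it shows its conditions and recent events, which usually reveal the underlying cause:
$ kubectl describe node <node_name>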
There are times when the error occurs simply because of a typo or misunderstanding of resource types and namespaces. Kubernetes has specific resource names, and using an incorrect one can lead to the message: “server doesn’t have a resource type ‘nodes’”.
Let’s explore how to verify we’re using the correct resource types and namespaces.
First, let’s ensure we’re typing the resource name correctly. It’s easy to make a small typo that causes big headaches. For example, we may accidentally type kubectl get nodez instead of kubectl get nodes, which triggers the very error we’re troubleshooting:
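$ kubectl get nodez
error: the server doesn't have a resource type "nodez"
When in doubt, we can ask the API server which resource types it actually serves (output trimmed to the nodes row):
$ kubectl api-resources | grep -w nodes
nodes     no     v1     false     Node
The columns show the resource name, its short name, the API version, whether it’s namespaced, and its kind.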
Furthermore, Kubernetes uses namespaces to partition resources. If we’re trying to access a resource in a different namespace, Kubernetes may not find it.
For instance, if we execute:
$ kubectl get pods
By default, this command lists pods in the default namespace. If our resources are in another namespace, we won’t see them.
To specify a namespace, we can use the --namespace flag:
$ kubectl get pods --namespace=<our_namespace>
We can also list resources across all namespaces:
$ kubectl get pods --all-namespaces
However, for cluster-wide resources like nodes, the namespace doesn’t apply because nodes are non-namespaced resources.
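We can confirm this by listing only the cluster-scoped resource types:
$ kubectl api-resources --namespaced=false
Nodes appear in this list, so commands like kubectl get nodes never need a namespace flag.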
In this article, we’ve explored common causes of the error “server doesn’t have a resource type ‘nodes’” and provided practical solutions to fix it.
We should remember that regular maintenance and double-checking configurations can prevent many common issues, keeping our development workflow efficient.