1. Introduction

A Kubernetes namespace can get stuck in the Terminating state for various reasons, including lingering resources, unresolved finalizers, and external dependencies. While we can force the namespace deletion, it’s better to rule out the possible causes first.

In this tutorial, we discuss how to delete a Kubernetes namespace stuck in the terminating state.

2. Debugging a Stuck Namespace

Before we attempt to force a namespace deletion, we’ll check its status using kubectl get:

$ kubectl get ns <namespace>
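
For a namespace stuck in deletion, the STATUS column shows Terminating; the age below is just a placeholder:

NAME          STATUS        AGE
<namespace>   Terminating   2d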

For verbose output, we’ll add the -o flag with an output format:

$ kubectl get ns <namespace> -o yaml

We’ll pay attention to the conditions field of the output, as it can hint at the cause of the issue.
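
For example, a namespace that still contains undeletable resources may report a NamespaceContentRemaining condition similar to the excerpt below; the resource type in the message is a hypothetical placeholder:

status:
  conditions:
    - type: NamespaceContentRemaining
      status: "True"
      reason: SomeResourcesRemain
      message: 'Some resources are remaining: widgets.example.com has 1 resource instances'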

But if that isn’t helpful, we’ll list all the resources in the namespace:

$ kubectl api-resources --verbs=list --namespaced=true -o name \
| xargs -n 1 kubectl get --ignore-not-found --show-kind -n <namespace>

If we find a lingering resource in the hanging namespace, our first attempt at resolving the issue would be to try to delete said resource.
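
For instance, if the listing shows a leftover custom resource, we’d try removing it directly; the resource kind and name here are hypothetical:

$ kubectl delete widgets.example.com/my-widget -n <namespace>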

However, if we find no resources, we can check for events in the namespace using kubectl events:

$ kubectl events -n <namespace>

We can restrict the output to certain event types using --types:

$ kubectl events -n <namespace> --types=Warning
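
As an illustration, a Warning event pointing at the culprit could look like the line below; the reason and message are made up for the example:

LAST SEEN   TYPE      REASON         OBJECT            MESSAGE
2m          Warning   FailedDelete   pod/example-pod   error deleting resource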

Checking the events in the namespace can offer insight into why the deletion can’t complete. But if that still doesn’t reveal the cause, we can check the kube-controller-manager logs to make sure we don’t miss anything:

$ kubectl get pods -o name -n kube-system | grep controller-manager \
| xargs -n 1 kubectl logs -n kube-system

In the command above, we retrieve the pod names in the kube-system namespace. Then, using grep, we keep only the name of the controller manager pod. After that, we pass that name as an argument to kubectl logs using xargs.
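
Because these logs can be long, we can additionally filter them for lines mentioning our namespace:

$ kubectl get pods -o name -n kube-system | grep controller-manager \
| xargs -n 1 kubectl logs -n kube-system | grep <namespace>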

If the controller manager logs offer no pointers as to what the problem may be either, we may proceed with a forced deletion. Even then, it’s worth checking for external dependencies before we do that.

3. Force-Deleting a Kubernetes Namespace Stuck in the Terminating State

To force the deletion of a hanging namespace, we can edit the namespace’s resource using kubectl edit:

$ kubectl edit ns <namespace>

In the editor that opens, we’ll empty the finalizers array, either by removing all of its items or by assigning null to finalizers.
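
For illustration, the relevant excerpt of the manifest might look like this before the change; the finalizer name is purely a placeholder:

apiVersion: v1
kind: Namespace
metadata:
  name: <namespace>
  finalizers:
    - example.com/some-finalizer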

After that, we’ll write the changes and close the file.

Once we remove the finalizers from the namespace resource, the namespace deletion should complete successfully. We can then confirm the deletion by getting the namespace:

$ kubectl get ns <namespace>

We should get an error message stating that the namespace was not found, confirming we’ve successfully deleted it.
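
The output resembles the following; the exact wording can vary slightly between kubectl versions:

Error from server (NotFound): namespaces "<namespace>" not found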

Instead of editing the resource in an editor, we can patch the namespace using kubectl patch:

$ kubectl patch ns <namespace> -p '{"metadata":{"finalizers": null }}'
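
This one-liner is handy in scripts since it doesn’t require an interactive editor. If the patch succeeds, kubectl prints a short confirmation:

namespace/<namespace> patched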

4. Conclusion

In this article, we went through some steps to check why a Kubernetes namespace gets stuck while terminating. Then, we discussed how to forcefully delete such namespaces.
