1. Introduction
While Kubernetes continues to grow in usage, one of its remaining problems is its resource requirements. Running a fully configured Kubernetes cluster typically demands a significant amount of CPU and memory, which makes it difficult for organizations to build and test software in an ops-like fashion.
That’s where MicroK8s comes in. In this article, we’ll look at how MicroK8s alleviates some of these problems by allowing us to run full-featured Kubernetes clusters with a small CPU and memory footprint.
2. What Is MicroK8s?
MicroK8s is a fully compliant Kubernetes distribution with a smaller CPU and memory footprint than most others. It’s designed from the ground up to provide a full Kubernetes experience for devices with limited computing power and memory.
MicroK8s boasts a number of features:
- Size: Its memory and storage requirements are a fraction of what many full-size Kubernetes clusters require. In fact, it’s designed to run on a single node/computer.
- Simplicity: By installing a bare minimum feature set, MicroK8s makes managing a cluster simple. We can create a fully functional Kubernetes cluster in minutes with as little as one command.
- Up-to-date: MicroK8s pulls all fixes and updates from the core Kubernetes project the same day, ensuring its clusters have the latest available changes almost instantly.
Because of these features, there are a variety of use cases where MicroK8s is a better choice than a standard Kubernetes deployment:
- Developer workstations: Provisioning can be automated, giving every developer a proper Kubernetes environment in which to test.
- CI/CD servers: Builds can run in repeatable, fixed execution environments.
- IoT devices: Devices with limited memory and remote connectivity can run their own Kubernetes clusters.
3. Getting Started with MicroK8s
MicroK8s comes with installers for every major operating system: Windows, Linux, and macOS.
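On Linux, for example, MicroK8s is installed as a snap package, and we can wait for the cluster to become ready before continuing:
$ sudo snap install microk8s --classic
$ microk8s status --wait-ready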
By default, MicroK8s disables most features during installation. Therefore, we must enable the features we want using the microk8s enable command.
Below is a list of common add-ons we may want to enable to get a traditional Kubernetes setup:
- cert-manager: Cloud-native certificate management
- dashboard: The Kubernetes dashboard
- dns: CoreDNS services
- ingress: Ingress controller for external access to services
- metallb: LoadBalancer controller
- metrics-server: A Kubernetes metrics server for API access to service metrics
- prometheus: Prometheus operator for monitoring and logging
- rbac: Role-Based Access Control for authorization
For example, to enable the dashboard and ingress add-ons, we would run:
$ microk8s enable dashboard ingress
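Conversely, an add-on we no longer need can be switched off with the microk8s disable command:
$ microk8s disable dashboard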
4. Using MicroK8s
With MicroK8s installed and configured, let’s take a closer look at using it.
4.1. Checking Status
We can check the status of our MicroK8s cluster using the microk8s status command:
$ microk8s status
microk8s is running
high-availability: no
datastore master nodes: 127.0.0.1:19001
datastore standby nodes: none
addons:
enabled:
dashboard # (core) The Kubernetes dashboard
dns # (core) CoreDNS
ha-cluster # (core) Configure high availability on the current node
helm # (core) Helm - the package manager for Kubernetes
helm3 # (core) Helm 3 - the package manager for Kubernetes
hostpath-storage # (core) Storage class; allocates storage from host directory
ingress # (core) Ingress controller for external access
metrics-server # (core) K8s Metrics Server for API access to service metrics
observability # (core) A lightweight observability stack for logs, traces and metrics
storage # (core) Alias to hostpath-storage add-on, deprecated
This tells us whether the cluster is running and which features are enabled. We can also use traditional kubectl commands to inspect the cluster:
$ microk8s kubectl get nodes
NAME STATUS ROLES AGE VERSION
microk8s-vm Ready <none> 1d13h v1.26.1
Note that it’s possible to use the native kubectl command as well. We simply have to generate the client config for kubectl:
$ microk8s kubectl config view --raw > ${HOME}/.kube/config
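With the config exported, the standalone kubectl client can talk to the cluster directly:
$ kubectl get nodes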
Finally, we can stop and start the MicroK8s cluster:
$ microk8s stop
Stopped.
$ microk8s start
When running on a laptop or other device without a dedicated power source, the MicroK8s team recommends shutting down the cluster when it isn’t needed in order to conserve power.
4.2. Deploying Applications
With the cluster up and running, we can now deploy applications using a couple of different means.
First, we can use traditional YAML files to deploy workloads:
$ microk8s kubectl apply -f /path/to/deployment.yaml
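As a quick sketch, a minimal manifest can also be applied inline; the nginx-demo name and nginx image below are just placeholders, not part of any particular setup:
$ microk8s kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-demo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx-demo
  template:
    metadata:
      labels:
        app: nginx-demo
    spec:
      containers:
        - name: nginx
          image: nginx:1.25
EOF
Once applied, microk8s kubectl get pods shows the resulting pod.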
Additionally, after enabling the Helm feature, we can deploy applications using Helm charts:
$ microk8s helm install elasticsearch elastic/elasticsearch
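Note that the chart’s repository has to be added and refreshed first; for the Elasticsearch example above, that’s the Elastic Helm repository:
$ microk8s helm repo add elastic https://helm.elastic.co
$ microk8s helm repo update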
4.3. Viewing the Dashboard
Assuming we’ve enabled the dashboard add-on, we can view it by first starting a port-forward:
$ microk8s kubectl port-forward -n kube-system service/kubernetes-dashboard 10443:443
We can then view the dashboard using the URL https://localhost:10443. To log in, we need a token or the full kubeconfig:
# Generate a token
$ microk8s kubectl create token default
# Generate kubeconfig
$ microk8s config
Note that the cluster uses a self-signed certificate, which will cause web browser warnings.
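Alternatively, MicroK8s provides a dashboard-proxy helper that sets up the port-forward and prints a login token for us in a single step:
$ microk8s dashboard-proxy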
4.4. High Availability
MicroK8s bills itself as production-grade, and thus it also supports high availability whenever multiple nodes are available. It’s easy to add a node from the command line:
$ microk8s add-node
This provides all the information we need to start new nodes and join them to the cluster.
From the node that we wish to join to this cluster, run:
$ microk8s join 192.168.64.2:25000/16715886fa58dcf561acbd6df44c614d/14b471cb0bb3
By default, new nodes join as part of the control plane in addition to running workloads, although it’s also possible to add nodes as workers only. Worker-only nodes can schedule workloads but don’t run the control plane, so they don’t contribute to high availability. A minimum of three control plane nodes is needed for high availability.
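For example, reusing the join address and token from the add-node output above, a node can join as a worker only by appending the --worker flag:
$ microk8s join 192.168.64.2:25000/16715886fa58dcf561acbd6df44c614d/14b471cb0bb3 --worker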
5. Conclusion
In this article, we’ve looked briefly at MicroK8s, a minimal, low-ops, production-grade Kubernetes distribution. MicroK8s supports the full set of Kubernetes features and is extensible through a wide range of add-ons. Additionally, its small memory and CPU footprint makes it a good candidate for resource-constrained environments such as developer workstations and DevOps pipelines.