1. Overview

We know Kubernetes (or K8s) as a portable, open-source platform for managing containerized workloads and services. It orchestrates applications running in pods across a secure cluster and handles tasks such as zero-downtime redeployments and container self-healing. However, its strengths lie in cloud automation, and it remains less practical for local development or low-resource environments.

If we want to run Kubernetes, for example, on embedded systems, or to quickly set up a local cluster with a few nodes, we might want to look at K3s.

In this tutorial, we’ll discuss the main features of K3s and make a simple cluster example.

2. K3s: A Lightweight K8s

K3s is a CNCF-certified Kubernetes distribution and Sandbox project designed for low-resource environments. Rancher Labs maintains K3s.

Overall, K3s offers a Kubernetes cluster setup with less overhead but still integrates with most of K8s’ architecture and features.

Here’s what makes K3s a lightweight distribution:

K3s packages the standard Kubernetes components in a single binary of less than 100 MB. This has been done by removing extra drivers, optional volume plugins, and third-party cloud integrations.

It strips out parts that aren’t strictly necessary for installing and running Kubernetes. However, we can still integrate with a cloud provider such as AWS or GCP using add-ons.

K3s should be able to run on a Linux host with as little as 512 MB of RAM (although 1 GB is recommended) and one CPU.

Although K3s is a lighter version of Kubernetes, it doesn’t change how Kubernetes works at its core. The K3s architecture consists of a master server and agents (or worker nodes) running in a cluster. It still ships CoreDNS and an Ingress Controller as part of the core networking, and it uses an embedded SQLite database to store the server state.

Nevertheless, if we’re looking for a highly available setup, we can plug in an external datastore such as etcd, MySQL, or PostgreSQL.
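
As a sketch, assuming a reachable MySQL instance, the server can point at an external datastore through the --datastore-endpoint flag (the connection string below is just a placeholder):

$ curl -sfL https://get.k3s.io | sh -s - server \
    --datastore-endpoint="mysql://username:password@tcp(hostname:3306)/k3s"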

Flannel comes as the default CNI plugin for cluster networking.

Finally, since K3s is a fully certified Kubernetes distribution, we can write the same YAML against a K3s cluster as we would against K8s, for example, when managing a workload or defining pod networking with Services and load balancing. We’ll use kubectl to interact with the cluster.

3. Setup

Let’s look at how to install K3s, access the cluster, and add nodes to the master.

3.1. Installation

A basic installation command is simple:

$ curl -sfL https://get.k3s.io | sh -

This executes a script from https://get.k3s.io and runs K3s as a service on our Linux host.

As an alternative, we can download a release and install it manually. Either way, there are server configuration options we can pass as flags or combine with environment variables.

For example, we might want to disable Flannel and use a different CNI provider.

We can do it by running the script:

$ curl -sfL https://get.k3s.io | sh -s - --flannel-backend none

In case we have already installed the K3s binary, we can pass the flag directly when starting the server:

$ k3s server --flannel-backend none
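
Alternatively, the install script honors the INSTALL_K3S_EXEC environment variable, so roughly the same effect can be achieved during installation:

$ curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="server --flannel-backend none" sh -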

3.2. Cluster Access

By default, K3s writes a kubeconfig file to the /etc/rancher/k3s directory. After the installation, just as with K8s, we need to tell kubectl where to find this configuration file.

We can point kubectl to the K3s configuration file by exporting an environment variable:

$ export KUBECONFIG=/etc/rancher/k3s/k3s.yaml

As an alternative, we can copy the configuration into our home directory, where kubectl looks by default:

$ mkdir -p ~/.kube
$ sudo k3s kubectl config view --raw | tee ~/.kube/config
$ chmod 600 ~/.kube/config
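
Note that K3s also bundles kubectl, so we can run commands through the k3s binary directly, without any kubeconfig setup:

$ sudo k3s kubectl get nodes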

We can check that our cluster is running:

$ kubectl get nodes
NAME              STATUS   ROLES                  AGE    VERSION
<node-name>      Ready    control-plane,master   4d3h   v1.25.6+k3s1

Notably, we can see that the control plane runs together with the master node.

Let’s now have a look at which pods get created across the namespaces:

$ kubectl get pods --all-namespaces
NAMESPACE              NAME                                         READY   STATUS             RESTARTS         AGE
kube-system            helm-install-traefik-crd-6v28l               0/1     Completed          0                4d2h
kube-system            helm-install-traefik-vvfh2                   0/1     Completed          2                4d2h
kube-system            svclb-traefik-cfa7b330-fkmms                 2/2     Running            10 (8h ago)      4d2h
kube-system            traefik-66c46d954f-2lvzr                     1/1     Running            5 (8h ago)       4d2h
kube-system            coredns-597584b69b-sq7mk                     1/1     Running            5 (8h ago)       4d2h
kube-system            local-path-provisioner-79f67d76f8-2dkkt      1/1     Running            8 (8h ago)       4d2h

We can see the pods that make up a basic K3s setup: Traefik as the ingress controller, its service load balancer (svclb), CoreDNS, and the local-path storage provisioner.

Instead of running these components in different processes, K3s runs them all in a single server or agent process. As it’s packaged in a single binary, we can also work offline, using an air-gap installation. Interestingly, we can also run K3s in Docker using K3d.
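
For instance, assuming the k3d CLI is installed, a minimal Dockerized cluster with two agents could be created roughly like this:

$ k3d cluster create demo --agents 2
$ kubectl get nodes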

3.3. Adding Nodes

If we want to add worker nodes to our cluster, we run the same installation script on each node, pointing it at the server host and passing the join token:

$ curl -sfL https://get.k3s.io | K3S_URL=https://<server-host>:6443 K3S_TOKEN=<node-token> sh -

The value of K3S_TOKEN is stored on the server:

$ cat /var/lib/rancher/k3s/server/node-token

Once the worker nodes join the master, the control plane recognizes them and starts scheduling workloads on them.
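
As a sketch, with one agent joined, the node list would look roughly like this (names and ages will differ):

$ kubectl get nodes
NAME              STATUS   ROLES                  AGE    VERSION
<node-name>       Ready    control-plane,master   4d3h   v1.25.6+k3s1
<agent-name>      Ready    <none>                 2m     v1.25.6+k3s1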

4. Cluster Example

Let’s build a simple cluster example in which we deploy an Nginx image.

Let’s start by creating the cluster as mentioned earlier:

$ curl -sfL https://get.k3s.io | sh -

Then, let’s create a deployment from an Nginx image with three replicas, exposed on port 80:

$ kubectl create deployment nginx --image=nginx --port=80 --replicas=3

Next, let’s check out our pods:

$ kubectl get pods
NAME                    READY   STATUS    RESTARTS   AGE
nginx-ff6774dc6-ntxv6   1/1     Running   0          17s
nginx-ff6774dc6-qs4r6   1/1     Running   0          17s
nginx-ff6774dc6-nbxmx   1/1     Running   0          17s

We should see three running containers.

Pods are not permanent resources: they get created and destroyed constantly. Therefore, we need a Service to dynamically map the pods’ IPs to the outside world.

Services can be of different types. We’ll choose a ClusterIP:

$ kubectl create service clusterip nginx --tcp=80:80
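
For reference, this imperative command is roughly equivalent to applying a declarative manifest like the following sketch:

apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  type: ClusterIP
  selector:
    app: nginx
  ports:
  - name: 80-80
    port: 80
    targetPort: 80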

Let’s have a look at our Service definition:

$ kubectl describe service nginx
Name:              nginx
Namespace:         default
Labels:            app=nginx
Annotations:       <none>
Selector:          app=nginx
Type:              ClusterIP
IP Family Policy:  SingleStack
IP Families:       IPv4
IP:                10.43.238.194
IPs:               10.43.238.194
Port:              80-80  80/TCP
TargetPort:        80/TCP
Endpoints:         10.42.0.10:80,10.42.0.11:80,10.42.0.9:80

We can see the Endpoints corresponding to the pod addresses where our application is reachable.
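
As a quick sanity check, a sketch using a throwaway busybox pod inside the cluster (the pod name is arbitrary):

$ kubectl run svc-test --image=busybox --rm -it --restart=Never -- wget -qO- http://nginx.default.svc.cluster.local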

ClusterIP Services aren’t directly accessible from outside the cluster. An Ingress Controller usually sits in front of them for caching, load balancing, and security reasons, such as filtering out malicious requests.

Finally, let’s define an Ingress resource in a YAML file. Traefik, which K3s ships by default, will route incoming requests to the Service:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx
  annotations:
    ingress.kubernetes.io/ssl-redirect: "false"
spec:
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: nginx
            port:
              number: 80

We can create the ingress by applying this resource to the cluster:

$ kubectl apply -f <nginx-ingress-file>.yaml

Let’s describe our Ingress:

$ kubectl describe ingress nginx
Name:             nginx
Labels:           <none>
Namespace:        default
Address:          192.168.1.103
Ingress Class:    traefik
Default backend:  <default>
Rules:
  Host        Path  Backends
  ----        ----  --------
  *           
              /   nginx:80 (10.42.0.10:80,10.42.0.11:80,10.42.0.9:80)
Annotations:  ingress.kubernetes.io/ssl-redirect: false

The Backends correspond to the pods behind the nginx Service, reachable on port 80.

We can now access the Nginx home page with a GET request to the 192.168.1.103 address from our host or a browser.
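
For example, using the address reported by the Ingress above (a sketch; the actual address depends on the node), we expect the default Nginx welcome page:

$ curl -s http://192.168.1.103/ | grep "<title>"
<title>Welcome to nginx!</title>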

We might also want a load balancer in front of the ingress controller. K3s uses ServiceLB as its default load balancer provider (the svclb-traefik pods we saw earlier).
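
As a sketch, ServiceLB kicks in whenever we create a Service of type LoadBalancer; for example, we could expose our deployment on port 8081 (an arbitrary port chosen here to avoid clashing with Traefik on 80/443):

$ kubectl expose deployment nginx --name=nginx-lb --type=LoadBalancer --port=8081 --target-port=80

ServiceLB then publishes the Service using the node’s IP address as the external IP.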

5. How K8s and K3s Differ

The most significant difference between K3s and K8s is the packaging: K3s ships as a single binary of less than 100 MB, while K8s runs its components as multiple separate processes.

Furthermore, being a lighter version, K3s can spin up a Kubernetes cluster in seconds. We can run operations faster and with lower resources.

K3s supports AMD64, ARM64, and ARMv7 architectures, among others. That means we can run it almost anywhere, for example, on a Raspberry Pi. K3s can also handle environments with limited connectivity.

We might also want to use K3s to test CI pipelines and check if our cluster runs smoothly.

We also get a quicker start and fewer commands to grasp when learning K3s. The effort to get up and running is lower than with K8s, especially if we don’t already have a background in distributed clustering.

However, we should still consider K8s for complex clusters or heavy-duty workloads. K3s does offer a high-availability option, but it requires extra work, for example, to plug in an external database or integrate with a cloud provider.

A decision between K3s and K8s will probably come down to the resources available. Still, K3s is a good choice for continuous integration testing or for kick-starting a Kubernetes-based project.

6. Conclusion

In this article, we looked at K3s as a lightweight distribution and a valid alternative to K8s. It requires few resources and has a quick setup, as we saw while creating a simple example cluster.

Nevertheless, it remains fully compatible with K8s and can even back a high-availability setup. Finally, we discussed how K8s and K3s differ and when we should prefer K3s.
