Last updated: December 29, 2024
As containerized applications grow in size and complexity, it becomes challenging to manage their deployment on Kubernetes by handling each YAML manifest individually. It also becomes more difficult to deploy them across different environments while managing each environment’s configuration.
Helm addresses these challenges by packaging the different application components in a single object called a chart. This makes it easier to distribute and deploy Kubernetes applications. Moreover, Helm provides a templating feature that allows for easier management of deployment configurations across multiple environments.
In this tutorial, we’re going to cover the fundamentals of Helm deployments. We’ll explain how Helm works, the components of a Helm chart, and the concept of Helm releases. Finally, we’ll show how to list the Kubernetes resources that belong to a specific Helm deployment.
Helm is the package manager for Kubernetes. It simplifies the deployment of applications on Kubernetes by grouping all the related Kubernetes resources (Deployment, Service, ConfigMap, and more) of an application in a single package called a chart. We can then deploy all these components on Kubernetes with a single command.
Helm also takes care of managing dependencies. If we’re deploying a chart that has a dependency on another chart, it will automatically install the other chart for us.
We can compare Helm to a typical OS package manager, where a single command installs, upgrades, or removes a piece of software while the package manager handles all the underlying details. Helm does the same thing, but for Kubernetes instead of an OS.
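For instance, the day-to-day workflow mirrors a package manager’s install/upgrade/remove cycle. As a rough sketch, assuming we’ve added the Bitnami chart repository and chosen my-release as a release name:
$ helm repo add bitnami https://charts.bitnami.com/bitnami
$ helm install my-release bitnami/apache
$ helm upgrade my-release bitnami/apache
$ helm uninstall my-release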
There are multiple ways to install Helm, but the easiest method is to use the installer script:
$ curl https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash
The above command downloads and executes the Helm installer script.
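If we prefer a system package manager, Helm is also distributed through common ones; for example, on macOS or Ubuntu, something like the following should work:
$ brew install helm
$ sudo snap install helm --classic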
Let’s check the Helm version to verify our installation:
$ helm version
version.BuildInfo{Version:"v3.16.3", GitCommit:"cfd07493f46efc9debd9cc1b02a0961186df7fdf", GitTreeState:"clean", GoVersion:"go1.22.7"}
We can see from the above command that Helm is installed successfully with version 3.16.3.
A Helm chart is a collection of files that have a specific structure inside a chart directory. Each file in the structure serves a specific purpose for deploying the application on Kubernetes. Let’s check a typical Helm chart file structure:
$ ls
Chart.lock README.md files values.schema.json
Chart.yaml charts templates values.yaml
In the above snippet, we used the ls command from inside a chart directory, so we can see the contents of a Helm chart. Let’s explain the most important files in this structure.
The Chart.yaml file contains information about the chart and the application it deploys. This information includes the chart name, version, type, dependencies, apiVersion, appVersion, and other metadata. We can think of the Chart.yaml file as the descriptor of the chart. So, it allows the user to understand what’s inside the chart.
Let’s view our Chart.yaml file contents:
$ cat Chart.yaml
apiVersion: v2
name: apache
appVersion: 2.4.62
dependencies:
  - name: common
    repository: oci://registry-1.docker.io/bitnamicharts
    tags:
      - bitnami-common
    version: 2.x.x
description: Apache HTTP Server is an open-source HTTP server. The goal of this project
  is to provide a secure, efficient and extensible server that provides HTTP services
  in sync with the current HTTP standards.
-------------------- OUTPUT TRIMMED ----------------------
As we can see in the above snippet, the Chart.yaml file shows that this chart deploys an Apache HTTP Server. Other information in the file includes the appVersion, which is Apache version 2.4.62, and the chart’s dependencies, which consist of another chart named common from the oci://registry-1.docker.io/bitnamicharts repository.
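Before installing the chart, we may need to fetch this dependency locally. As a quick sketch, run from inside the chart directory and assuming access to the Bitnami OCI registry:
$ helm dependency update
This downloads the common chart into the charts/ directory and records the resolved version in Chart.lock.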
The templates directory of a chart is where all the Kubernetes manifests for the application exist. It includes the YAML files for the application resources like Deployments, Services, and other objects:
$ cd templates/
$ ls
NOTES.txt deployment.yaml metrics-svc.yaml serviceaccount.yaml
_helpers.tpl extra-list.yaml networkpolicy.yaml servicemonitor.yaml
configmap-vhosts.yaml hpa.yaml pdb.yaml svc.yaml
configmap.yaml ingress.yaml prometheusrules.yaml tls-secrets.yaml
We can see here that the files under the templates folder represent different Kubernetes objects like deployment.yaml, svc.yaml, and ingress.yaml. These are the Kubernetes resources that will be created when deploying the chart.
These files aren’t complete Kubernetes manifests. Rather, they’re templates that follow the usual manifest structure but contain placeholders whose values can be injected dynamically.
Let’s check one of the files under the templates directory to understand it more:
$ cat deployment.yaml
apiVersion: {{ include "common.capabilities.deployment.apiVersion" . }}
kind: Deployment
metadata:
  name: {{ include "common.names.fullname" . }}
  namespace: {{ .Release.Namespace | quote }}
  labels: {{- include "common.labels.standard" ( dict "customLabels" .Values.commonLabels "context" $ ) | nindent 4 }}
spec:
  replicas: {{ .Values.replicaCount }}
--------------- OUTPUT TRIMMED -------------
Inside the deployment.yaml file, we can see a structure similar to a normal Deployment manifest. However, some values are replaced by placeholder expressions.
For example, in the replicas field, we usually see a number that represents the desired count of Pods. But here, we see the {{ .Values.replicaCount }} expression. This is the templating feature of Helm. This expression can now be replaced dynamically, and we can pass the number of replicas from an external configuration.
This external configuration is typically set in a values.yaml file.
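To preview how these placeholders get filled in without touching the cluster, we can render the chart locally. A minimal sketch, run from the chart directory:
$ helm template my-apache-app .
This prints the fully rendered manifests, with expressions such as {{ .Values.replicaCount }} already replaced by concrete values.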
The values.yaml file is the default source for passing configuration values to the chart. These values replace the placeholder code in the chart templates to build a complete Kubernetes manifest that’s passed to the Kubernetes API server. Let’s check the contents of the values.yaml file:
$ cat values.yaml
image:
  registry: docker.io
  repository: bitnami/apache
  tag: 2.4.62-debian-12-r12
replicaCount: 1
-------------------- OUTPUT TRIMMED -------------------
As we can see, the file is in plain YAML format, with a set of key-value pairs in hierarchical structure. These values replace their corresponding placeholder code in the chart templates.
For example, in the previous deployment.yaml file, we’ve seen the {{ .Values.replicaCount }} expression. This means that Helm will look for a value with the name replicaCount through the Values object.
As we mentioned, the values.yaml file is the default source for these values. In other words, the Values object will get the replicaCount from the values.yaml file. So, at the end, we’ll have the expression replaced by 1, which is the value for the replicaCount entry in the values.yaml file.
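Since values.yaml only holds defaults, we can override individual entries at install time. As a hedged sketch, using a hypothetical release name and a hypothetical values-prod.yaml file for a specific environment, either of the following would do:
$ helm install my-custom-apache . --set replicaCount=3
$ helm install my-custom-apache . -f values-prod.yaml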
In simple terms, a release is a deployed instance of a chart. So, when we have our chart as a running application on the cluster, we call it a release.
We can create multiple releases from the same chart by deploying multiple instances of the chart under different names. Each release can also accumulate multiple revisions if we apply changes to our chart and upgrade it.
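As an illustrative sketch only, with hypothetical release names, deploying the same chart twice would give us two independent releases, and helm history would list the revisions of each:
$ helm install my-apache-dev .
$ helm install my-apache-prod .
$ helm history my-apache-prod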
Let’s now try to deploy our chart into Kubernetes:
$ helm install my-apache-app .
NAME: my-apache-app
LAST DEPLOYED: Tue Dec 3 12:13:06 2024
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
CHART NAME: apache
CHART VERSION: 11.2.22
APP VERSION: 2.4.62
The above command deploys our chart with its components into Kubernetes. In the command, install is the Helm subcommand that creates a new release, my-apache-app is the name we give to that release, and the trailing dot (.) points to the chart directory we’re currently in.
Now, let’s verify the Helm releases we have in the cluster:
$ helm ls
NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
my-apache-app default 1 2024-12-03 12:42:15.695762661 +0000 UTC deployed apache-11.2.22 2.4.62
The helm ls command lists the available releases in the cluster. As we can see, our release is shown with the status of deployed.
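For more detail on a single release, we can also use helm status and helm history, which show the release information and its revision history, respectively:
$ helm status my-apache-app
$ helm history my-apache-app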
Now, we might need to know which Kubernetes objects were created by a specific release. For example, if a release creates a Deployment, a Service, and a ConfigMap, we want to group them together to identify the components of the application.
By default, Helm adds a set of labels to the resources that it creates. We can list the objects running on the cluster and use these labels to filter which of them belongs to our Helm release.
The first label we can use is app.kubernetes.io/managed-by=Helm, which indicates that this resource was created by Helm. The other label is app.kubernetes.io/instance=release-name, which indicates the release name this resource belongs to.
Let’s use the kubectl get all command and use these labels as filters:
$ kubectl get all -l='app.kubernetes.io/managed-by=Helm,app.kubernetes.io/instance=my-apache-app'
NAME READY STATUS RESTARTS AGE
pod/my-apache-app-98dbc4946-h5k4d 1/1 Running 0 36m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/my-apache-app LoadBalancer 10.43.251.153 <pending> 80:31928/TCP,443:31148/TCP 36m
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/my-apache-app 1/1 1 1 36m
NAME DESIRED CURRENT READY AGE
replicaset.apps/my-apache-app-98dbc4946 1 1 1 36m
In the above command, the -l option filters the output to resources that carry the specified labels. We’ve provided both labels discussed above, using our release name my-apache-app as the instance value. As a result, only the resources that belong to our release appear in the output.
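As an alternative sketch, we can ask Helm itself what it submitted for the release: helm get manifest prints the rendered manifests, and piping them into kubectl should retrieve their live state:
$ helm get manifest my-apache-app
$ helm get manifest my-apache-app | kubectl get -f -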
In this article, we’ve covered the fundamentals of Helm and explained how to identify Kubernetes objects linked to a specific Helm deployment.
We discussed the benefits of Helm as a package manager for Kubernetes and the different components of a Helm chart. We also showed how to create a Helm release, which is a running instance of a Helm chart.
Finally, we used the Helm default labels to filter the resources created by a specific Helm release.