1. Overview

In this tutorial, we’ll learn how to deploy a highly available Redis cluster onto a Kubernetes cluster using a Helm chart. Additionally, we’ll work through a demonstration to see the failover mechanism in action.

2. Redis Replication and Sentinel

High availability (HA) is a concept that describes the ability of a system to remain operational and accessible for a high percentage of the time. Usually, an HA system consists of multiple standby instances, known as replicas. The replicas apply updates from the primary to keep their state closely in sync with it. When the primary fails, one of the replicas can take over the primary role with minimal data loss. Many systems, such as PostgreSQL, Redis, and Elasticsearch, offer such an HA deployment.

2.1. High Availability in Redis

In the context of Redis, deploying an HA Redis service involves running multiple replicas in addition to a primary instance. The replicas connect to the primary to receive write changes. The combination of a primary and multiple replicas forms an HA Redis cluster, which we’ll refer to as a cluster in the remainder of the article.

Notably, we’ll use the lowercase form of the word “cluster” to refer to the replication setup consisting of a primary and multiple replicas. This should not be confused with the Redis Cluster software.
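Outside of Kubernetes, we could wire up such a replica manually. As a quick illustration (the hostnames here are placeholders), the REPLICAOF command attaches an instance to a primary:

$ redis-cli -h replica-host -p 6379 REPLICAOF primary-host 6379
OK

The Helm chart we’ll use later performs this wiring for us.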

2.2. Redis Sentinel

The Redis Sentinel, or Sentinel for short, is a separate process that runs alongside the Redis instance. For a Kubernetes deployment, the Sentinel usually runs as a separate container within the same pod as the Redis container.

The Sentinels constantly monitor the health of all participants in the cluster. When the existing primary fails, the remaining Sentinels in the cluster agree on the failure and promote one of the existing replicas as the new primary.
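Under the hood, each Sentinel is driven by a few configuration directives. As a rough sketch (the hostname and timeouts are illustrative, and the chart generates the real values), a minimal sentinel.conf looks like this:

sentinel monitor mymaster redis-sentinel-node-0.redis-sentinel-headless 6379 2
sentinel down-after-milliseconds mymaster 5000
sentinel failover-timeout mymaster 60000

Here, the trailing 2 on the monitor line is the quorum: the number of Sentinels that must agree the primary is down before a failover starts.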

3. Deploying High Availability Redis

The Redis Helm chart offers the most convenient way to roll out an HA Redis setup. Instead of deploying the different components separately, the Helm chart ensures all the required components are deployed as a unit. Additionally, the Redis Helm chart provides various properties that we can configure for the installation.

3.1. Getting the Redis Helm Chart

To install the Redis Helm chart, we’ll first need to get the helm binary using the apt-get command:

$ sudo apt-get install -y helm

Then, we run the helm repo add command to add the Bitnami Helm Chart repository:

$ helm repo add bitnami https://charts.bitnami.com/bitnami

After we’ve added the Bitnami Helm chart repository, we have access to the Bitnami Redis Helm chart.
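Optionally, we can refresh the local chart index and confirm the chart is now discoverable:

$ helm repo update
$ helm search repo bitnami/redis

The search should list the bitnami/redis chart together with its latest version.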

3.2. Configuring Redis Helm Chart Installation

The default configuration of the Bitnami Redis Helm chart deploys a Redis StatefulSet with Sentinel disabled. Let’s customize the installation using a values.yaml file to enable the deployment of the Redis Sentinel process:

$ cat > values.yaml <<EOF
global:
  redis:
    password: password
replica:
  replicaCount: 2
sentinel:
  enabled: true
EOF

The command above uses the cat command to write the heredoc into the values.yaml file. In the file, we set replica.replicaCount to two to deploy two replica instances in addition to the primary. Importantly, we need a minimum of two replicas per primary to ensure the deployment is indeed HA: together with the primary, that gives us three Sentinels, enough to form a majority when voting on a failover. Additionally, we enable the Sentinel process for our Redis instances by setting sentinel.enabled to true.
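Before installing, we can also dump the full set of properties the chart exposes, which is useful for discovering further customization options:

$ helm show values bitnami/redis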

3.3. Installing the Redis Helm Chart

To install the Redis Helm chart onto our Kubernetes cluster, we run the helm install command:

$ helm install redis-sentinel bitnami/redis --values values.yaml

The command above installs the bitnami/redis Helm chart and names the release redis-sentinel. Additionally, we pass the customization file values.yaml to the installation using the --values flag.
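Once the installation completes, we can verify the release at any time:

$ helm status redis-sentinel

The command prints the release’s state, along with the chart’s usage notes.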

3.4. Checking the Deployment

We can check the installation of the Helm chart by checking for the presence of a Redis StatefulSet:

$ kubectl get statefulset
NAME                  READY   AGE
redis-sentinel-node   3/3     9m36s

The StatefulSet, in turn, deploys three pods. The three pods consist of one primary pod and two replica pods, as per our replicaCount specification:

$ kubectl get pods
NAME                    READY   STATUS    RESTARTS   AGE
redis-sentinel-node-0   2/2     Running   0          10m
redis-sentinel-node-1   2/2     Running   0          8m59s
redis-sentinel-node-2   2/2     Running   0          8m33s

Importantly, we must ensure the Sentinel container is deployed alongside each Redis instance. To list the containers within all the pods, we can use the kubectl top pod --containers command:

$ kubectl top pod --containers
POD                     NAME       CPU(cores)   MEMORY(bytes)
redis-sentinel-node-0   redis      14m          8Mi
redis-sentinel-node-0   sentinel   13m          7Mi
redis-sentinel-node-1   redis      16m          8Mi
redis-sentinel-node-1   sentinel   15m          7Mi
redis-sentinel-node-2   redis      19m          8Mi
redis-sentinel-node-2   sentinel   14m          7Mi

As we can see, there’s a sentinel container running within each pod, alongside the Redis container.
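Note that kubectl top relies on the cluster’s metrics server. If it isn’t available, we can list the container names straight from the pod spec instead:

$ kubectl get pod redis-sentinel-node-0 -o jsonpath='{.spec.containers[*].name}'
redis sentinel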

4. Highly Available Redis Cluster in Action

Let’s test our HA Redis cluster to see the failover in action. We’ll start a simple client script that continuously increments a counter.

Then, we’ll simulate a failure on the primary and see the failover in action. Critically, we’ll verify that the client only sees minimal disruption during the failure.

4.1. Redis Client Script

Firstly, we’ll create a pod that contains the Redis client for interfacing with the Redis instance:

$ kubectl run redis-client --restart='Never' --image redis --command -- sleep infinity

Notably, we run the sleep infinity command to keep the pod running. Then, we can drop into the pod using the kubectl exec command, starting a bash shell within the pod:

$ kubectl exec --tty -i redis-client -- bash

The --tty and -i options start an interactive shell and attach its input and output streams to our terminal. Then, we can create a client script that increments the mycounter counter at a 1-second interval:

$ cat client-increment.sh
#!/bin/bash
# Let redis-cli read the password from the environment
export REDISCLI_AUTH=password
while true
do
    # Ask Sentinel (port 26379) for the current primary's address
    CURRENT_PRIMARY=$(redis-cli -h redis-sentinel -p 26379 SENTINEL get-master-addr-by-name mymaster)
    # The reply holds the hostname and the port; keep only the hostname
    CURRENT_PRIMARY_HOST=$(echo "$CURRENT_PRIMARY" | cut -d' ' -f1 | head -n 1)
    echo "Current master's host: $CURRENT_PRIMARY_HOST"
    # Send the write to the primary, as replicas are read-only by default
    redis-cli -h "$CURRENT_PRIMARY_HOST" -p 6379 INCR mycounter
    sleep 1
done

Let’s break down the script. First, we export the REDISCLI_AUTH environment variable to set the authentication password for the redis-cli command.

Then, we use the SENTINEL get-master-addr-by-name Redis command to get the current primary instance’s address, where mymaster is the default name for the primary Redis instance. Importantly, we specify the Sentinel process’s port, 26379, when retrieving the current primary’s address. This is because the Sentinel process, not the Redis instance itself, maintains the information about the current primary.

Then, we parse the output of SENTINEL get-master-addr-by-name using the cut and head commands to extract the primary instance’s hostname. Finally, we use redis-cli to increment mycounter by one.
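For reference, running the same Sentinel command interactively shows that the reply is a two-element array containing the primary’s hostname and port, which is why the script needs the extra parsing step:

$ redis-cli -h redis-sentinel -p 26379 SENTINEL get-master-addr-by-name mymaster
1) "redis-sentinel-node-0.redis-sentinel-headless.default.svc.cluster.local"
2) "6379"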

When we run the script, we should see the counter incremented every second:

$ ./client-increment.sh 
Current master's host: redis-sentinel-node-0.redis-sentinel-headless.default.svc.cluster.local
(integer) 1
Current master's host: redis-sentinel-node-0.redis-sentinel-headless.default.svc.cluster.local
(integer) 2
...

4.2. Primary Failure and Failover

With the client script running, we can now simulate a failure on the primary instance. The goal is to observe the failover in action and how it affects the client.

To simulate the failure, we’ll delete the primary instance’s pod. From a separate terminal, we delete the redis-sentinel-node-0 pod using the kubectl delete pods command:

$ kubectl delete pods redis-sentinel-node-0
pod "redis-sentinel-node-0" deleted
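Since a StatefulSet always reconciles toward its desired replica count, the controller recreates the deleted pod almost immediately. We can observe this from yet another terminal:

$ kubectl get pods --watch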

Let’s look at the client’s output while the failure happens:

...
Current master's host: redis-sentinel-node-0.redis-sentinel-headless.default.svc.cluster.local
(integer) 18
Current master's host: redis-sentinel-node-0.redis-sentinel-headless.default.svc.cluster.local
(integer) 19
Current master's host: redis-sentinel-node-0.redis-sentinel-headless.default.svc.cluster.local
Error: Server closed the connection
Current master's host: redis-sentinel-node-2.redis-sentinel-headless.default.svc.cluster.local
(integer) 20
Current master's host: redis-sentinel-node-2.redis-sentinel-headless.default.svc.cluster.local
(integer) 21
...

We can see that the client encounters an error when the primary Redis instance shuts down. However, the subsequent lines show that we’re now interacting with the new primary, redis-sentinel-node-2. Furthermore, the counter continues from the value it had on the initial primary, confirming that the replica had replicated the data before taking over.
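To double-check the promotion, we can ask the new primary for its replication role directly from the client pod, after exporting REDISCLI_AUTH=password in our shell as before:

$ redis-cli -h redis-sentinel-node-2.redis-sentinel-headless.default.svc.cluster.local -p 6379 INFO replication | grep role
role:master

Redis still reports the role as master, which is its term for the primary.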

5. Conclusion

In this tutorial, we’ve briefly learned about the concept of high availability in computing. Then, we’ve seen how Redis can be made highly available by running several replicas alongside the primary. Additionally, we’ve learned how the Redis Sentinel process monitors the cluster and triggers a failover when the primary fails.

We’ve then used the Bitnami Redis Helm chart to deploy an HA Redis cluster. Finally, we’ve seen the failover in action with an example.
