1. Overview

In this tutorial, we’ll look at the problem of reliably sending requests to all the pods under the same Service resource.

2. Problem Statement

In a typical Kubernetes deployment, we create a Service resource that serves as a proxy to a list of backing pods. This provides a single entry point to a potentially large array of fungible pods. The benefit of this mechanism is that the pods are free to scale horizontally to increase capacity, without the client needing to know about each new pod that’s added.
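
For reference, a conventional Service in front of such pods might look like the following manifest, where the names and the port are purely illustrative:

apiVersion: v1
kind: Service
metadata:
  name: web-server
spec:
  selector:
    app: web-server
  ports:
    - protocol: TCP
      port: 8080
      targetPort: 8080

Apart from its name, the headless variant we create later in this tutorial differs from this manifest only by an additional clusterIP: None field.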

However, sometimes we want to broadcast an HTTP request to all the pods under the same Deployment resource instead of having it served by a single pod. For example, consider a Deployment whose pods each maintain an in-memory cache that needs to be evicted through an HTTP call once in a while to ensure freshness. In this case, we want the eviction request to reach every pod of the Deployment.

One naïve approach is to send as many requests as there are pods to the Service resource of the pods. In theory, the Service’s load-balancing mechanism will spread the requests across the pods so that each pod eventually receives one.
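
For illustration, assuming a regular ClusterIP Service named web-server in front of three replicas that listen on port 8080, the naïve broadcast could be sketched like this:

$ for i in 1 2 3; do curl http://web-server:8080/clear-caches; done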

While simple, this approach is unreliable in the presence of other traffic hitting the same Service. Specifically, the other traffic can interleave with our requests, so the load balancer may route two of our requests to the same pod while another pod receives none, causing that pod to miss the broadcast.

3. Resolving IP Addresses of All Pods Using Headless Service

A more robust solution to this problem involves a two-step process. First, we resolve the IP addresses of all the pods we want to broadcast the request to. Then, we iterate over the list of IP addresses and send the request to each destination. This way, we can be sure that every pod receives our request.

For the first step, we can use the headless service to resolve the IP addresses of all the pods. A headless service differs from a normal Service resource in that it doesn’t have a cluster IP address. When resolving a headless service’s domain name, we’ll be given a list of the IP addresses of the pods it’s targeting.

At a high level, the solution involves creating a headless service targeting the pods. Then, we use Domain Name System (DNS) utilities such as the dig command to resolve the IP addresses of all the targeted pods. Finally, we can write a for-loop that sends the request to every IP address we obtained in the first step.
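
As a compact sketch of the whole flow, assuming a headless service named web-server-headless and pods listening on port 8080, the broadcast boils down to a single pipeline:

$ dig +short +search web-server-headless | xargs -I{} curl http://{}:8080/clear-caches

Here, xargs runs one curl invocation per IP address that dig prints. In the following sections, we’ll build this up step by step with an explicit for-loop instead.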

Let’s work through an example to demonstrate the idea.

3.1. Demonstration Setup

We’ll create a custom image, web-server, that runs an HTTP server process for our demonstration. The HTTP server serves a single endpoint, /clear-caches, which returns the current timestamp and the last time the endpoint was called:

$ curl http://localhost:8080/clear-caches
Current Time: 2024-03-23 16:55:17
Last clear: 2024-03-23 16:53:46

Then, we create a Deployment resource to deploy three replicas of that image onto our cluster using the kubectl apply command:

$ kubectl apply -f -<<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-server
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web-server
  template:
    metadata:
      labels:
        app: web-server
    spec:
      containers:
      - name: web-server
        image: web-server:latest
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 8080
EOF
deployment.apps/web-server created

In the example above, we pipe the Deployment manifest to the kubectl apply command as a heredoc. At the end of the setup, we should see three running web-server pods. We can validate that using the kubectl get pods command:

$ kubectl get pods
NAME                          READY   STATUS    RESTARTS   AGE
web-server-577f8f7f96-mxppg   1/1     Running   0          21s
web-server-577f8f7f96-jvgdl   1/1     Running   0          21s
web-server-577f8f7f96-4ls65   1/1     Running   0          21s

With the environment set, we can proceed to create a headless service to target the pods.

3.2. Creating a Headless Service

To create a headless service that targets our web-server pods, we can define the manifest as a heredoc and pipe it to the kubectl apply command:

$ kubectl apply -f -<<EOF
apiVersion: v1
kind: Service
metadata:
  name: web-server-headless
spec:
  selector:
    app: web-server
  clusterIP: None
  ports:
    - protocol: TCP
      port: 8080
      targetPort: 8080
EOF
service/web-server-headless created

Importantly, we need to set the clusterIP field to None to make the Service resource headless. After that, we can retrieve the details of the resource we’ve created using the kubectl get svc command:

$ kubectl get svc web-server-headless
NAME                  TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)    AGE
web-server-headless   ClusterIP   None         <none>        8080/TCP   40s

From the output, we can verify that the service created is indeed headless as the CLUSTER-IP of the Service resource is None.
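
As an additional sanity check, we can list the endpoints that the headless service tracks using the kubectl get endpoints command. The output should contain one entry per pod, although the exact IP addresses depend on the cluster:

$ kubectl get endpoints web-server-headless
NAME                  ENDPOINTS                                           AGE
web-server-headless   10.42.0.15:8080,10.42.0.16:8080,10.42.0.17:8080     40s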

Let’s test our headless service using the dig command. Firstly, we create a client pod using the slongstreet/bind-utils image:

$ kubectl apply -f -<<EOF
apiVersion: v1
kind: Pod
metadata:
  name: client
spec:
  containers:
  - name: bind-utils-container
    image: slongstreet/bind-utils:latest
    command: ["tail"]
    args: ["-f", "/dev/null"]
EOF
pod/client created

The slongstreet/bind-utils image contains various DNS utilities, including the dig command. Then, we drop into the shell of the pod using the kubectl exec command:

$ kubectl exec -it client -- /bin/sh
$ 

Subsequently, we can use the dig command to resolve the IP addresses behind our headless service’s domain name, web-server-headless:

$ dig +short +search web-server-headless
10.42.0.17
10.42.0.15
10.42.0.16

The command above specifies two additional options on the dig command. We pass the +short option so that dig prints only the IP addresses, which makes the result easy to use in our script later. Importantly, we must also specify the +search option so that dig expands the short name web-server-headless using the search list in the pod’s /etc/resolv.conf.
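
Alternatively, we can drop the +search option by querying the fully qualified domain name of the service directly. Assuming the service lives in the default namespace and the cluster uses the default cluster.local domain, the query becomes:

$ dig +short web-server-headless.default.svc.cluster.local
10.42.0.17
10.42.0.15
10.42.0.16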

3.3. Broadcasting Requests

To broadcast a request, we iterate over the dig command’s output and send a request to each IP address associated with the headless service.

Concretely, we first store the result of the dig command into a variable, IP_ADDRESSES:

$ IP_ADDRESSES=$(dig +short +search web-server-headless)

The command above uses command substitution to capture the dig command’s output in the IP_ADDRESSES variable.

Then, we can loop over the variable and invoke a curl request to call the /clear-caches endpoint:

$ for IP_ADDRESS in $IP_ADDRESSES; do curl http://${IP_ADDRESS}:8080/clear-caches; done
Hostname: web-server-577f8f7f96-mxppg
Current Time: 2024-03-24 05:08:28
Last clear: 2024-03-24 05:08:21
Hostname: web-server-577f8f7f96-jvgdl
Current Time: 2024-03-24 05:08:28
Last clear: 2024-03-24 05:08:21
Hostname: web-server-577f8f7f96-4ls65
Current Time: 2024-03-24 05:08:28
Last clear: 2024-03-24 05:08:21

In the script snippet above, we use a for-loop to iterate over the contents of the IP_ADDRESSES variable. For each IP address in the list, we send a GET request to the destination on port 8080 using the curl command.

We can further enhance the script by sending the requests in parallel so that each request doesn’t have to wait for the preceding one to complete. A separate article covers various methods for parallelizing task execution that we can employ here.
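
As a minimal sketch of one such method, we can run each curl command in the background and then wait for all of them to finish:

$ for IP_ADDRESS in $IP_ADDRESSES; do curl -s http://${IP_ADDRESS}:8080/clear-caches & done; wait

The trailing wait builtin blocks until every background curl process has exited, so the script still finishes only after all the pods have been reached.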

4. Conclusion

In this tutorial, we’ve first explained the challenge of broadcasting requests to all the pods of a Kubernetes Deployment resource. Then, we’ve explored a solution that resolves the IP addresses of the pods through a headless service and sends the request to each of them. Finally, we’ve demonstrated the idea with a working example.
