Last updated: April 24, 2025
Sticky sessions, also called session affinity, create a consistent connection between a user and a specific backend pod. This routing method helps preserve session data or cached content by sending all of a user’s requests to the same pod. It plays an important role in applications that keep user-specific information in memory, such as session states or temporary data. Redirecting requests to other pods can cause session loss and unpredictable behavior.
In this tutorial, we’ll explore how sticky sessions work in Kubernetes, why they are important for stateful applications, and the different ways they can be implemented.
Sticky sessions ensure that a client’s requests consistently reach the same pod as long as that pod remains available. This setup is crucial for applications that store session data in memory, such as login tokens, shopping cart contents, or Cross-Site Request Forgery (CSRF) tokens, as it prevents data loss when traffic shifts between pods.
By default, Kubernetes Services distribute requests across all available pods (round-robin or random, depending on the kube-proxy mode), with no guarantee that successive requests from the same client reach the same pod. While efficient for stateless applications, this behavior can disrupt session continuity in stateful workloads. Without sticky sessions, users might experience broken sessions or inconsistent behavior.
Sticky sessions solve this by binding a client’s traffic to a single pod. Once the first connection is established, subsequent requests from the same client continue to reach that same pod, as long as it’s still running. This routing behavior improves consistency and reduces reliance on external session stores.
In the next sections, we'll look at configuring sticky sessions with Services, Ingress controllers, and StatefulSets.
Sticky session implementation depends on how the application is exposed. Kubernetes Services support session affinity, Ingress controllers can be configured with sticky session annotations, and StatefulSets maintain pod identity and storage.
Kubernetes Services support basic sticky sessions using the sessionAffinity field. This feature routes incoming traffic based on the client’s IP address:
```yaml
apiVersion: v1
kind: Service
metadata:
  name: ops-app-service
spec:
  selector:
    app: ops-app
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8080
  sessionAffinity: ClientIP
```
Setting sessionAffinity to ClientIP ensures that all requests from the same client IP are consistently routed to the same pod. As long as the pod stays healthy and the client’s IP doesn’t change, the connection remains sticky.
However, this method isn’t foolproof. In environments where IP addresses often change, such as mobile networks or NAT-based traffic, this approach may not hold up reliably.
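The affinity timeout can also be tuned. Kubernetes exposes a sessionAffinityConfig block for this; if unset, affinity expires after 10800 seconds (three hours) of inactivity. The one-hour value below is illustrative:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: ops-app-service
spec:
  selector:
    app: ops-app
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8080
  sessionAffinity: ClientIP
  sessionAffinityConfig:
    clientIP:
      # Drop the client-to-pod binding after 1 hour of inactivity (default: 10800)
      timeoutSeconds: 3600
```

After the timeout elapses, the next request from that client may land on a different pod, so the application should tolerate occasional session resets.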
Ingress controllers route HTTP and HTTPS traffic into the cluster and can be configured to maintain session stickiness using annotations. For example, the NGINX Ingress Controller supports cookie-based session affinity, which allows client sessions to remain bound to the same backend pod regardless of IP address changes:
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ops-app-ingress
  annotations:
    nginx.ingress.kubernetes.io/affinity: "cookie"
    nginx.ingress.kubernetes.io/session-cookie-name: "route"
    nginx.ingress.kubernetes.io/session-cookie-expires: "3600"
    nginx.ingress.kubernetes.io/session-cookie-max-age: "3600"
    nginx.ingress.kubernetes.io/session-cookie-path: "/"
spec:
  rules:
    - host: opsapp.cloud.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: ops-app-service
                port:
                  number: 80
```
Setting nginx.ingress.kubernetes.io/affinity: "cookie" enables sticky sessions by having the controller set a cookie in the response. The client sends the cookie back on subsequent requests, and the Ingress controller uses it to route that traffic to the same pod, even when the client's IP changes.
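A couple of related annotations are worth knowing as well. The names below assume a reasonably current NGINX Ingress Controller release; check the controller's annotation reference for the version in use:

```yaml
metadata:
  annotations:
    nginx.ingress.kubernetes.io/affinity: "cookie"
    # "persistent" keeps existing sessions pinned when pods are added or removed;
    # the default "balanced" mode may remap some sessions to rebalance load
    nginx.ingress.kubernetes.io/affinity-mode: "persistent"
    # Issue a fresh cookie (and re-pin the session) if the assigned pod fails
    nginx.ingress.kubernetes.io/session-cookie-change-on-failure: "true"
```

With "persistent" mode, stickiness survives scale-ups at the cost of potentially uneven load, which is the trade-off discussed later in this article.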
Some workloads require more than simple load balancing: they need stable network identities and persistent storage across pod restarts. This is where StatefulSets are especially useful. They're well suited to applications that keep session data in memory or write directly to local storage:
```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ops-app
spec:
  serviceName: "ops-app-service"
  replicas: 3
  selector:
    matchLabels:
      app: ops-app
  template:
    metadata:
      labels:
        app: ops-app
    spec:
      containers:
        - name: ops-app-container
          image: ops-app-image
          ports:
            - containerPort: 8080
          volumeMounts:
            - name: session-storage
              mountPath: /data
  volumeClaimTemplates:
    - metadata:
        name: session-storage
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 1Gi
```
With StatefulSets, each pod gets a consistent name (like ops-app-0, ops-app-1) and its own persistent volume. This setup keeps session data mapped to the same pod, even if that pod restarts or gets rescheduled. As a result, applications can preserve session continuity without depending on an external store.
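One detail to keep in mind: the serviceName field of a StatefulSet must reference a headless Service, which is what gives each replica its stable DNS record (for example, ops-app-0.ops-app-service). If the ops-app-service name is reused for this purpose, it would need to be defined headless, roughly like this:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: ops-app-service
spec:
  # Headless: no cluster IP or load balancing; each pod gets its own DNS record
  clusterIP: None
  selector:
    app: ops-app
  ports:
    - port: 80
      targetPort: 8080
```

Clients (or an upstream router) can then address a specific pod by name, which is effectively the strongest form of stickiness.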
So far, the focus has been on how to implement sticky sessions. But how do these approaches hold up in practice? Picking the right one isn't just about making it work; it's about choosing a session strategy that fits the network flow, application behavior, and operational demands:
| Method | Pros | Cons |
|---|---|---|
| sessionAffinity: ClientIP | Dead-simple to enable; fits internal traffic where IPs are stable | Breaks under NAT or proxies; struggles with mobile and dynamic IPs |
| Ingress Controller Sticky Sessions | Cookie-based; resilient to IP changes; integrates cleanly with web clients | Depends on proper Ingress setup; needs annotations managed carefully |
| StatefulSets | Preserves pod identity and storage; ideal for apps with session persistence | Consumes more resources; more complex to manage and scale |
Although ClientIP is simple to configure, it tends to break in cloud-native environments where IP addresses change frequently. Cookie-based session affinity via Ingress handles this churn more effectively and is a better fit for browser-facing applications. For workloads that require in-memory session persistence, such as databases or cache-heavy services, StatefulSets provide strong consistency, though they come with added operational complexity.
Sticky sessions can improve user experience in stateful applications, but they also come with trade-offs that affect scalability and reliability. It’s important to monitor their impact and adopt supporting strategies:
Watch for uneven traffic distribution: Sticky sessions can cause some pods to become hotspots, especially when session durations vary. Monitor pod utilization and configure autoscaling to handle imbalances effectively.
Use metrics to guide routing adjustments: Tools like Prometheus and Grafana can help identify uneven traffic patterns caused by sticky routing. Load balancer configurations may need fine-tuning to ensure better distribution.
Consider offloading session data: When scalability is a priority, offloading session state to external stores like Redis or Memcached can reduce dependency on sticky sessions and enable more flexible scaling across pods.
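For the autoscaling point above, a standard HorizontalPodAutoscaler on CPU utilization is a reasonable starting point. The names and thresholds below are illustrative, and the target could equally be a StatefulSet:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: ops-app-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: ops-app
  minReplicas: 3
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          # Scale out before pinned pods become hotspots
          averageUtilization: 70
```

Note that scaling out doesn't move already-pinned sessions to new pods, and scaling in terminates pods along with the sessions bound to them — one more argument for externalizing session state where possible.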
In this article, we covered how sticky sessions work in Kubernetes and how to implement them using Services, Ingress controllers, and StatefulSets. Each approach comes with its own strengths, depending on how the application is structured and exposed. Knowing when and where to apply these strategies can go a long way in improving session consistency, system resilience, and the overall user experience.