1. Overview

Authentication is a fundamental aspect of designing any messaging system like Kafka. We can implement it with approaches such as user-based credentials, SSL certificates, or token-based authentication.

In this tutorial, we’ll learn how to implement an authentication mechanism called Simple Authentication and Security Layer (SASL) in a Kafka service. We’ll also implement the client-side authentication using the mechanism provided by Spring Kafka.

2. Introduction to Kafka Authentication

Kafka supports various authentication and authorization mechanisms to secure communication over the network. It supports SSL, SASL, or delegation tokens. Authentication can take place between client and broker, between broker and Zookeeper, or between brokers.

We can use any relevant approach depending on the system requirements and other infrastructural factors. SSL authentication uses X.509 certificates to authenticate clients and brokers, and provides either one-way or mutual authentication.

SASL authentication is a security framework that supports different authentication mechanisms:

  • SASL/GSSAPI – GSSAPI (Generic Security Services Application Program Interface) is a standard API that abstracts the underlying security mechanism and integrates easily with an existing Kerberos service. SASL/GSSAPI authentication uses a Key Distribution Center (KDC) to provide authentication over the network and is commonly used where infrastructure like Active Directory or a Kerberos server is already available.
  • SASL/PLAIN – SASL/PLAIN uses user-based credentials for authentication; since the credentials travel in plain text, it’s mainly used in non-production environments.
  • SASL/SCRAM – With SASL/SCRAM authentication, a salted challenge-response is created by hashing the passwords with a salt, providing better security than the plain-text mechanism. SCRAM supports different hashing algorithms such as SHA-256, SHA-512, or the weaker SHA-1.
  • SASL/OAUTHBEARER – SASL/OAUTHBEARER uses an OAuth 2.0 bearer token for authentication and is useful when we have an existing identity provider like Keycloak or Okta.

We can also combine SASL with SSL to provide transport-layer encryption.

For authorization in Kafka, we can use the built-in ACLs, OAuth/OIDC (OpenID Connect), or a custom authorizer.

In this tutorial, we’ll focus on the GSSAPI implementation, as it’s widely used and keeps the setup simple.

3. Implement Kafka Service With SASL/GSSAPI Authentication

Let’s imagine we need to build a Kafka service that supports GSSAPI authentication in a Docker environment.
For that, we can utilize a Kerberos runtime to provide the Ticket Granting Ticket (TGT) service and act as an authentication server.

3.1. Set Up Kerberos

To implement the Kerberos service in a Docker environment, we’ll require a custom Kerberos setup.

First, let’s include a krb5.conf file to configure the realm BAELDUNG.COM with a few configs:

[libdefaults]
  default_realm = BAELDUNG.COM
  dns_lookup_realm = false
  dns_lookup_kdc = false
  forwardable = true
  rdns = true

[realms]
  BAELDUNG.COM = {
    kdc = kdc
    admin_server = kdc
  }

A realm is the logical or domain name under which Kerberos authenticates our services.

We’ll need a script that initializes the Kerberos database using kdb5_util, creates the principals and their associated keytab files for Kafka, Zookeeper, and the client application using kadmin.local, and finally starts the Kerberos service with the krb5kdc and kadmind commands.

Let’s implement this script as setup_kdc.sh to add the principals, create the keytab files, and run the Kerberos service:

kadmin.local -q "addprinc -randkey kafka/localhost@BAELDUNG.COM"
kadmin.local -q "addprinc -randkey zookeeper/zookeeper@BAELDUNG.COM"
kadmin.local -q "addprinc -randkey client@BAELDUNG.COM"

kadmin.local -q "ktadd -k /etc/krb5kdc/keytabs/kafka.keytab kafka/localhost@BAELDUNG.COM"
kadmin.local -q "ktadd -k /etc/krb5kdc/keytabs/zookeeper.keytab zookeeper/zookeeper@BAELDUNG.COM"
kadmin.local -q "ktadd -k /etc/krb5kdc/keytabs/client.keytab client@BAELDUNG.COM"

krb5kdc
kadmind -nofork

The format for any principal is generally <service-name>/<hostname>@REALM. The hostname section is optional, and the REALM is conventionally uppercase.

We should also note that the principals must be set correctly; otherwise, authentication will fail due to a mismatch in the service name or fully qualified domain name.
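To make the principal format concrete, here’s a minimal sketch that splits a principal string into its parts (the PrincipalParts class and parse helper are our own illustration, not part of any Kerberos API):

```java
public class PrincipalParts {
    public final String service;
    public final String host;   // may be null: the hostname section is optional
    public final String realm;

    PrincipalParts(String service, String host, String realm) {
        this.service = service;
        this.host = host;
        this.realm = realm;
    }

    // Splits "<service-name>[/<hostname>]@REALM" into its components
    public static PrincipalParts parse(String principal) {
        int at = principal.lastIndexOf('@');
        if (at < 0) {
            throw new IllegalArgumentException("Principal has no @REALM part: " + principal);
        }
        String realm = principal.substring(at + 1);
        String primary = principal.substring(0, at);
        int slash = primary.indexOf('/');
        if (slash < 0) {
            return new PrincipalParts(primary, null, realm);
        }
        return new PrincipalParts(primary.substring(0, slash), primary.substring(slash + 1), realm);
    }

    public static void main(String[] args) {
        PrincipalParts p = parse("kafka/localhost@BAELDUNG.COM");
        System.out.println(p.service + " " + p.host + " " + p.realm);
        // prints: kafka localhost BAELDUNG.COM
    }
}
```

A quick sanity check like this can save time when debugging service-name or hostname mismatches.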

Finally, let’s implement a Dockerfile to prepare the Kerberos environment:

FROM debian:bullseye

RUN apt-get update && \
    apt-get install -y krb5-kdc krb5-admin-server krb5-user && \
    rm -rf /var/lib/apt/lists/*
COPY config/krb5.conf /etc/krb5.conf
COPY setup_kdc.sh /setup_kdc.sh

RUN chmod +x /setup_kdc.sh
EXPOSE 88 749

CMD ["/setup_kdc.sh"]

The above Dockerfile uses the previously created krb5.conf and setup_kdc.sh files to initialize and run the Kerberos service.

We’ll also add a kadm5.acl file to give full permission to the admin principal:

*/admin@BAELDUNG.COM *

3.2. Configuration for Kafka and Zookeeper

To configure GSSAPI authentication in Kafka, we’ll use JAAS (Java Authentication and Authorization Service) to specify how Kafka and the clients should authenticate with the Kerberos Key Distribution Center (KDC).

We’ll create the JAAS configs for the Kafka server and Zookeeper in separate files.

First, we’ll implement the zookeeper_jaas.conf file and set the previously created zookeeper.keytab file and principal parameters:

Server {
    com.sun.security.auth.module.Krb5LoginModule required
    useKeyTab=true
    storeKey=true
    keyTab="/etc/kafka/keytabs/zookeeper.keytab"
    principal="zookeeper/zookeeper@BAELDUNG.COM";
};

The principal must be the same as the Kerberos principal created for Zookeeper. By setting useKeyTab to true, we force the authentication to use the keytab file.

Then, let’s configure the Kafka server and client JAAS-related properties in the kafka_server_jaas.conf file:

KafkaServer {
    com.sun.security.auth.module.Krb5LoginModule required
    useKeyTab=true
    storeKey=true
    keyTab="/etc/kafka/keytabs/kafka.keytab"
    principal="kafka/localhost@BAELDUNG.COM"
    serviceName="kafka";
};

Client {
    com.sun.security.auth.module.Krb5LoginModule required
    useKeyTab=true
    storeKey=true
    keyTab="/etc/kafka/keytabs/client.keytab"
    principal="client@BAELDUNG.COM"
    serviceName="kafka";
};

3.3. Integrate Kafka with Zookeeper and Kerberos

Kafka, Zookeeper, and the custom Kerberos service can be easily integrated as Docker Compose services.

First, we’ll implement the custom Kerberos service using the earlier Dockerfile:

services:
  kdc:
    build:
      context: .
      dockerfile: Dockerfile
    volumes:
      - ./config:/etc/krb5kdc
      - ./keytabs:/etc/krb5kdc/keytabs
      - ./config/krb5.conf:/etc/krb5.conf
    ports:
      - "88:88/udp"

The above service exposes the standard Kerberos UDP port 88 to both the container network and the host.

Then, let’s set up the Zookeeper service using the confluentinc/cp-zookeeper base image:

zookeeper:
  image: confluentinc/cp-zookeeper:latest
  container_name: zookeeper
  environment:
    ZOOKEEPER_CLIENT_PORT: 2181
    ZOOKEEPER_TICK_TIME: 2000
    KAFKA_OPTS: "-Djava.security.auth.login.config=/etc/kafka/zookeeper_jaas.conf"
  volumes:
    - ./config/zookeeper_jaas.conf:/etc/kafka/zookeeper_jaas.conf
    - ./keytabs:/etc/kafka/keytabs
    - ./config/krb5.conf:/etc/krb5.conf
  ports:
    - "2181:2181"

The above Zookeeper service is configured with the zookeeper_jaas.conf for the GSSAPI authentication as well.

Finally, we’ll set the Kafka service with the GSSAPI-related environment properties:

kafka:
  image: confluentinc/cp-kafka:latest
  container_name: kafka
  environment:
    KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
    KAFKA_SASL_MECHANISM_INTER_BROKER_PROTOCOL: GSSAPI
    KAFKA_SASL_ENABLED_MECHANISMS: GSSAPI
    KAFKA_LISTENERS: SASL_PLAINTEXT://:9092
    KAFKA_ADVERTISED_LISTENERS: SASL_PLAINTEXT://localhost:9092
    KAFKA_INTER_BROKER_LISTENER_NAME: SASL_PLAINTEXT
    KAFKA_OPTS: "-Djava.security.auth.login.config=/etc/kafka/kafka_server_jaas.conf"
    KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
  volumes:
    - ./config/kafka_server_jaas.conf:/etc/kafka/kafka_server_jaas.conf
    - ./keytabs:/etc/kafka/keytabs
    - ./config/krb5.conf:/etc/krb5.conf
  depends_on:
    - zookeeper
    - kdc
  ports:
    - 9092:9092

In the above Kafka service, we’ve enabled the GSSAPI authentication on both the inter-broker and client to Kafka.
The Kafka service will use the earlier created kafka_server_jaas.conf file for GSSAPI configurations like the principal and the keytab file.

We should note that the KAFKA_ADVERTISED_LISTENERS property defines the endpoint that Kafka clients will use to connect to the broker.

Now, we’ll run the entire Docker setup using the docker compose command:

$ docker compose up --build
kafka      | [2025-02-03 18:09:10,147] INFO Successfully authenticated client: authenticationID=kafka/localhost@BAELDUNG.COM; authorizationID=kafka/localhost@BAELDUNG.COM. (org.apache.kafka.common.security.authenticator.SaslServerCallbackHandler)
kafka      | [2025-02-03 18:09:10,148] INFO [RequestSendThread controllerId=1001] Controller 1001 connected to localhost:9092 (id: 1001 rack: null) for sending state change requests (kafka.controller.RequestSendThread)

From the above logs, we confirm that the Kafka, Zookeeper, and Kerberos services are all integrated without errors.

4. Implement the Kafka Client With Spring

We’ll implement the Kafka listener application using Spring Kafka.

4.1. Maven Dependencies

First, we’ll include the spring-kafka dependency:

<dependency>
    <groupId>org.springframework.kafka</groupId>
    <artifactId>spring-kafka</artifactId>
    <version>3.1.2</version>
</dependency>

4.2. Implement the Kafka Listener

We’ll use Spring Kafka’s KafkaListener and ConsumerRecord classes to implement the listener.

Let’s implement the Kafka listener with the @KafkaListener annotation and add the required topic:

@KafkaListener(topics = "test-topic")
public void receive(ConsumerRecord<String, String> consumerRecord) {
    log.info("Received payload: '{}'", consumerRecord.toString());
    messages.add(consumerRecord.value());
}

Also, we’ll configure the Spring listener-related configurations in the application-sasl.yml file:

spring:
  kafka:
    bootstrap-servers: localhost:9092
    consumer:
      group-id: test
      auto-offset-reset: earliest
      key-deserializer: org.apache.kafka.common.serialization.StringDeserializer
      value-deserializer: org.apache.kafka.common.serialization.StringDeserializer
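For reference, the YAML above corresponds to the standard Kafka consumer property keys. A small sketch (the ConsumerProps class name is our own) building the equivalent configuration programmatically:

```java
import java.util.Properties;

public class ConsumerProps {
    // Builds the same consumer configuration as application-sasl.yml,
    // using the standard Kafka property keys
    public static Properties consumerProperties() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "test");
        props.put("auto.offset.reset", "earliest");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        return props;
    }

    public static void main(String[] args) {
        System.out.println(consumerProperties());
    }
}
```

Spring Boot maps the spring.kafka.consumer.* YAML keys onto these properties when creating the consumer.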

Now, let’s run the Spring application and verify the setup:

kafka | [2025-02-01 03:08:01,532] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1001] Failed authentication with /172.21.0.1 (channelId=172.21.0.4:9092-172.21.0.1:59840-16) (Unexpected Kafka request of type METADATA during SASL handshake.) (org.apache.kafka.common.network.Selector)

The above logs show that the client application fails to authenticate to the Kafka server, which is expected, as we haven’t yet configured SASL on the client side.

To fix this issue, we’ll also need to include the Spring Kafka JAAS config in the application.

5. Configure the Kafka Client With JAAS Config

We’ll use the spring.kafka.properties configurations to provide the SASL/GSSAPI settings.

Now, we’ll include a few additional configurations for the client’s principal and keytab file, and set sasl.mechanism to GSSAPI:

spring:
  kafka:
    bootstrap-servers: localhost:9092
    properties:
      sasl.mechanism: GSSAPI
      sasl.jaas.config: >
        com.sun.security.auth.module.Krb5LoginModule required
        useKeyTab=true
        storeKey=true
        keyTab="./src/test/resources/sasl/keytabs/client.keytab"
        principal="client@BAELDUNG.COM"
        serviceName="kafka";

We should note that the above serviceName config must exactly match the service name part of the Kafka broker’s principal.
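Since sasl.jaas.config must be a single, exactly quoted JAAS line, it can help to build it programmatically and avoid quoting mistakes. A minimal sketch (the JaasConfigBuilder class and helper are our own, and the paths are placeholders):

```java
public class JaasConfigBuilder {
    // Builds a Krb5LoginModule JAAS line for SASL/GSSAPI
    // from a keytab path, a principal, and a service name
    public static String buildGssapiJaasConfig(String keytabPath, String principal, String serviceName) {
        return "com.sun.security.auth.module.Krb5LoginModule required"
            + " useKeyTab=true"
            + " storeKey=true"
            + " keyTab=\"" + keytabPath + "\""
            + " principal=\"" + principal + "\""
            + " serviceName=\"" + serviceName + "\";";
    }

    public static void main(String[] args) {
        System.out.println(buildGssapiJaasConfig(
            "./src/test/resources/sasl/keytabs/client.keytab", "client@BAELDUNG.COM", "kafka"));
    }
}
```

The same string can then be set on the spring.kafka.properties.sasl.jaas.config property instead of inlining it in YAML.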

Let’s again verify the Kafka consumer application.

6. Testing the Kafka Listener in the Application

To quickly verify the listener, we’ll use Kafka’s provided utility program, kafka-console-producer.sh, to send messages to a topic.

We’ll run the below command to send a message to the topic:

$ kafka-console-producer.sh --broker-list localhost:9092 \
  --topic test-topic \
  --producer-property security.protocol=SASL_PLAINTEXT \
  --producer-property sasl.mechanism=GSSAPI \
  --producer-property sasl.kerberos.service.name=kafka \
  --producer-property sasl.jaas.config="com.sun.security.auth.module.Krb5LoginModule required 
    useKeyTab=true keyTab=\"/<path>/client.keytab\" 
    storeKey=true principal=\"client@BAELDUNG.COM\";"
> hello

In the above command, we’re passing similar auth-related configs like the security.protocol, sasl.mechanism, and sasl.jaas.config with the client.keytab file.

Now, let’s verify the listener logs for the received message:

08:52:13.663 INFO  c.b.s.KafkaConsumer - Received payload: 'ConsumerRecord(topic = test-topic, .... key = null, value = hello)'

We should note that there might be a few more configurations required in any production-ready application, like configuring SSL certificates or DNS.

7. Conclusion

In this article, we’ve learned how to set up a Kafka service and enable SASL/GSSAPI authentication using a custom Kerberos setup in a Docker environment.

We’ve also implemented the client-side listener application and configured the GSSAPI authentication using the JAAS config. Finally, we tested the entire setup by sending a message and receiving that message in the listener.

The code backing this article is available on GitHub.