
1. Overview

In this article, we will explore the challenges and solutions surrounding message ordering in Apache Kafka. Processing messages in the correct order is crucial for maintaining data integrity and consistency in distributed systems. While Kafka offers mechanisms to maintain message order, achieving this in a distributed environment presents its own set of complexities.

2. Ordering Within a Partition and Its Challenges

Kafka maintains order within a single partition by assigning a unique offset to each message. This guarantees sequential message appending within that partition. However, when we scale up and use multiple partitions, maintaining a global order becomes complex. Different partitions receive messages at varying rates, complicating strict ordering across them.

2.1. Producer and Consumer Timing

Let’s talk about how Kafka handles the order of messages. There’s a difference between the order in which a producer sends messages and the order in which the consumer receives them. By sticking to just one partition, we process messages in the order they arrive at the broker. However, this order might not match the sequence in which we originally sent them. The mismatch can happen because of network latency or because we resend a message. To keep things in line, we can implement producers with acknowledgments and retries. This way, we make sure that messages not only reach Kafka but also arrive in the right order.

2.2. Challenges with Multiple Partitions

This distribution across partitions, while beneficial for scalability and fault tolerance, introduces the complexity of achieving global message ordering. For instance, suppose we send two messages, M1 and M2, in that order. Kafka receives them just as we sent them, but routes them to different partitions. Here’s the catch: just because M1 was sent first doesn’t mean it’ll be processed before M2. This can be problematic in scenarios where the order of processing is crucial, such as financial transactions.
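A common way to keep related messages ordered without giving up partitioning is keying: Kafka’s default partitioner picks the partition from a hash of the record key, so records that share a key always land on the same partition and stay ordered relative to each other. Here’s a minimal sketch of that idea, using a plain hashCode modulo as a stand-in (Kafka’s actual partitioner uses murmur2):

```java
public class KeyPartitioningSketch {

    // Stand-in for Kafka's default partitioner: equal keys
    // always map to the same partition, so their relative order is preserved.
    public static int partitionFor(String key, int numPartitions) {
        return Math.floorMod(key.hashCode(), numPartitions);
    }

    public static void main(String[] args) {
        int partitions = 5;
        // M1 and M2 share a key, so they go to the same partition, in order
        int m1 = partitionFor("order-42", partitions);
        int m2 = partitionFor("order-42", partitions);
        System.out.println(m1 == m2); // always true
    }
}
```

This gives per-key ordering, not global ordering, which is often all a use case (such as all events for one account) actually needs.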

2.3. Single Partition Message Ordering

We create two topics: ‘single_partition_topic’, which has one partition, and ‘multi_partition_topic’, which has five partitions. Below is an example of the producer sending messages to the single-partition topic:

Properties producerProperties = new Properties();
producerProperties.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, KAFKA_CONTAINER.getBootstrapServers());
producerProperties.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, LongSerializer.class.getName());
producerProperties.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, JacksonSerializer.class.getName());
producer = new KafkaProducer<>(producerProperties);
for (long sequenceNumber = 1; sequenceNumber <= 10; sequenceNumber++) {
    UserEvent userEvent = new UserEvent(UUID.randomUUID().toString());
    userEvent.setGlobalSequenceNumber(sequenceNumber);
    userEvent.setEventNanoTime(System.nanoTime());
    ProducerRecord<Long, UserEvent> producerRecord = new ProducerRecord<>(Config.SINGLE_PARTITION_TOPIC, userEvent);
    Future<RecordMetadata> future = producer.send(producerRecord);
    sentUserEventList.add(userEvent);
    RecordMetadata metadata = future.get();
    logger.info("User Event ID: " + userEvent.getUserEventId() + ", Partition : " + metadata.partition());
}

UserEvent is a POJO class that implements the Comparable interface, which lets us sort messages by globalSequenceNumber (an external sequence number). Since the producer sends POJO message objects, we implemented a custom Jackson Serializer and Deserializer.
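The exact class lives in the backing project; a minimal version consistent with the snippet above (field names assumed from the getters and setters used there) could look like this:

```java
public class UserEvent implements Comparable<UserEvent> {

    private final String userEventId;
    private long globalSequenceNumber; // external sequence assigned by the producer
    private long eventNanoTime;

    public UserEvent(String userEventId) {
        this.userEventId = userEventId;
    }

    public String getUserEventId() { return userEventId; }
    public long getGlobalSequenceNumber() { return globalSequenceNumber; }
    public void setGlobalSequenceNumber(long sequenceNumber) { this.globalSequenceNumber = sequenceNumber; }
    public long getEventNanoTime() { return eventNanoTime; }
    public void setEventNanoTime(long nanoTime) { this.eventNanoTime = nanoTime; }

    @Override
    public int compareTo(UserEvent other) {
        // order events by the external sequence number
        return Long.compare(this.globalSequenceNumber, other.globalSequenceNumber);
    }
}
```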

Partition 0 receives all user events, and the event IDs appear in the following sequence:

841e593a-bca0-4dd7-9f32-35792ffc522e
9ef7b0c0-6272-4f9a-940d-37ef93c59646
0b09faef-2939-46f9-9c0a-637192e242c5
4158457a-73cc-4e65-957a-bf3f647d490a
fcf531b7-c427-4e80-90fd-b0a10bc096ca
23ed595c-2477-4475-a4f4-62f6cbb05c41
3a36fb33-0850-424c-81b1-dafe4dc3bb18
10bca2be-3f0e-40ef-bafc-eaf055b4ee26
d48dcd66-799c-4645-a977-fa68414ca7c9
7a70bfde-f659-4c34-ba75-9b43a5045d39

In Kafka, each consumer group operates as a distinct entity. If two consumers belong to different consumer groups, they both will receive all the messages on the topic. This is because Kafka treats each consumer group as a separate subscriber.

If two consumers belong to the same consumer group and subscribe to a topic with multiple partitions, Kafka will ensure that each consumer reads from a unique set of partitions. This is to allow concurrent processing of messages.

Kafka ensures that within a consumer group, no two consumers read the same message, thus each message is processed only once per group.

The code below is for a consumer consuming messages from the same topic:

Properties consumerProperties = new Properties();
consumerProperties.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, KAFKA_CONTAINER.getBootstrapServers());
consumerProperties.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, LongDeserializer.class.getName());
consumerProperties.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, JacksonDeserializer.class.getName());
consumerProperties.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
consumerProperties.put(Config.CONSUMER_VALUE_DESERIALIZER_SERIALIZED_CLASS, UserEvent.class);
consumerProperties.put(ConsumerConfig.GROUP_ID_CONFIG, "test-group");
consumer = new KafkaConsumer<>(consumerProperties);
consumer.subscribe(Collections.singletonList(Config.SINGLE_PARTITION_TOPIC));
ConsumerRecords<Long, UserEvent> records = consumer.poll(TIMEOUT_WAIT_FOR_MESSAGES);
records.forEach(record -> {
    UserEvent userEvent = record.value();
    receivedUserEventList.add(userEvent);
    logger.info("User Event ID: " + userEvent.getUserEventId());
});

In this case, the output shows the consumer consuming messages in the same order; below are the sequential event IDs from the output:

841e593a-bca0-4dd7-9f32-35792ffc522e
9ef7b0c0-6272-4f9a-940d-37ef93c59646
0b09faef-2939-46f9-9c0a-637192e242c5
4158457a-73cc-4e65-957a-bf3f647d490a
fcf531b7-c427-4e80-90fd-b0a10bc096ca
23ed595c-2477-4475-a4f4-62f6cbb05c41
3a36fb33-0850-424c-81b1-dafe4dc3bb18
10bca2be-3f0e-40ef-bafc-eaf055b4ee26
d48dcd66-799c-4645-a977-fa68414ca7c9
7a70bfde-f659-4c34-ba75-9b43a5045d39

2.4. Multiple Partition Message Ordering

For a topic with multiple partitions, the consumer and producer configurations are the same. The only difference is the topic and partitions where the messages go; here, the producer sends messages to the topic ‘multi_partition_topic’:

Future<RecordMetadata> future = producer.send(new ProducerRecord<>(Config.MULTI_PARTITION_TOPIC, sequenceNumber, userEvent));
sentUserEventList.add(userEvent);
RecordMetadata metadata = future.get();
logger.info("User Event ID: " + userEvent.getUserEventId() + ", Partition : " + metadata.partition());

The consumer consumes messages from the same topic:

consumer.subscribe(Collections.singletonList(Config.MULTI_PARTITION_TOPIC));
ConsumerRecords<Long, UserEvent> records = consumer.poll(TIMEOUT_WAIT_FOR_MESSAGES);
records.forEach(record -> {
    UserEvent userEvent = record.value();
    receivedUserEventList.add(userEvent);
    logger.info("User Event ID: " + userEvent.getUserEventId());
});

The producer output lists event IDs alongside their respective partitions as below:

939c1760-140e-4d0c-baa6-3b1dd9833a7d, 0
47fdbe4b-e8c9-4b30-8efd-b9e97294bb74, 4
4566a4ec-cae9-4991-a8a2-d7c5a1b3864f, 4
4b061609-aae8-415f-94d7-ae20d4ef1ca9, 3
eb830eb9-e5e9-498f-8618-fb5d9fc519e4, 2
9f2a048f-eec1-4c68-bc33-c307eec7cace, 1
c300f25f-c85f-413c-836e-b9dcfbb444c1, 0
c82efad1-6287-42c6-8309-ae1d60e13f5e, 4
461380eb-4dd6-455c-9c92-ae58b0913954, 4
43bbe38a-5c9e-452b-be43-ebb26d58e782, 3

For the consumer, the output shows that the messages are not consumed in the order they were sent. The event IDs from the output are below:

939c1760-140e-4d0c-baa6-3b1dd9833a7d
47fdbe4b-e8c9-4b30-8efd-b9e97294bb74
4566a4ec-cae9-4991-a8a2-d7c5a1b3864f
c82efad1-6287-42c6-8309-ae1d60e13f5e
461380eb-4dd6-455c-9c92-ae58b0913954
eb830eb9-e5e9-498f-8618-fb5d9fc519e4
4b061609-aae8-415f-94d7-ae20d4ef1ca9
43bbe38a-5c9e-452b-be43-ebb26d58e782
c300f25f-c85f-413c-836e-b9dcfbb444c1
9f2a048f-eec1-4c68-bc33-c307eec7cace

3. Message Ordering Strategies

3.1. Using a Single Partition

We could use a single partition in Kafka, as demonstrated in our earlier example with ‘single_partition_topic’, which ensures the ordering of messages. However, this approach has its trade-offs:

  • Throughput Constraint: Imagine we’re at a busy pizza shop. If we’ve only got one chef (the producer) and one waiter (the consumer) working on one table (the partition), they can only serve so many pizzas before things start to back up. In the world of Kafka, when we’re dealing with a ton of messages, sticking to a single partition is like that one-table scenario. A single partition becomes a bottleneck in high-volume scenarios, and the rate of message processing is limited since only one producer and one consumer can operate on a single partition at a time.
  • Reduced Parallelism: Continuing the example above, if we have multiple chefs (producers) and waiters (consumers) working on multiple tables (partitions), the number of orders fulfilled increases. Kafka’s strength lies in parallel processing across multiple partitions. With just one partition, this advantage is lost, leading to sequential processing and further restricting message flow.

In essence, while a single partition guarantees order, it does so at the expense of reduced throughput.

3.2. External Sequencing with Time Window Buffering

In this approach, the producer tags each message with a global sequence number. Multiple consumer instances consume messages concurrently from different partitions and use these sequence numbers to reorder messages, ensuring global order.

In a real-world scenario with multiple producers, we would manage the global sequence through a shared resource that’s accessible across all producer processes, such as a database sequence or a distributed counter. This ensures that the sequence numbers are unique and ordered across all messages, irrespective of which producer sends them:

for (long sequenceNumber = 1; sequenceNumber <= 10 ; sequenceNumber++) {
    UserEvent userEvent = new UserEvent(UUID.randomUUID().toString());
    userEvent.setEventNanoTime(System.nanoTime());
    userEvent.setGlobalSequenceNumber(sequenceNumber);
    Future<RecordMetadata> future = producer.send(new ProducerRecord<>(Config.MULTI_PARTITION_TOPIC, sequenceNumber, userEvent));
    sentUserEventList.add(userEvent);
    RecordMetadata metadata = future.get();
    logger.info("User Event ID: " + userEvent.getUserEventId() + ", Partition : " + metadata.partition());
}
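As an in-process stand-in for such a shared resource, an AtomicLong illustrates the contract a database sequence or distributed counter must satisfy: every caller gets a unique, monotonically increasing number. The class name and approach here are illustrative, not from the backing project:

```java
import java.util.concurrent.atomic.AtomicLong;

public class GlobalSequence {

    // In production this would be a database sequence or a distributed
    // counter shared by all producer processes; an AtomicLong only
    // provides these guarantees within a single JVM.
    private static final AtomicLong COUNTER = new AtomicLong(0);

    public static long next() {
        return COUNTER.incrementAndGet();
    }
}
```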

On the consumer side, we group the messages into time windows and then process them sequentially. Messages that arrive within a specific time frame are batched together, and once the window elapses, we process the batch. This ensures that messages within that time frame are processed in order, even if they arrive at different times within the window. The consumer buffers the messages and reorders them based on sequence numbers before processing. To process messages in the correct order, the consumer maintains a buffer period during which it polls for messages multiple times before processing the buffered batch; this buffer period should be long enough to cope with potential message-ordering issues:

consumer.subscribe(Collections.singletonList(Config.MULTI_PARTITION_TOPIC));
List<UserEvent> buffer = new ArrayList<>();
long lastProcessedTime = System.nanoTime();
ConsumerRecords<Long, UserEvent> records = consumer.poll(TIMEOUT_WAIT_FOR_MESSAGES);
records.forEach(record -> {
    buffer.add(record.value());
});
while (!buffer.isEmpty()) {
    if (System.nanoTime() - lastProcessedTime > BUFFER_PERIOD_NS) {
        processBuffer(buffer, receivedUserEventList);
        lastProcessedTime = System.nanoTime();
    }
    records = consumer.poll(TIMEOUT_WAIT_FOR_MESSAGES);
    records.forEach(record -> {
        buffer.add(record.value());
    });
}

void processBuffer(List<UserEvent> buffer, List<UserEvent> receivedUserEventList) {
    Collections.sort(buffer);
    buffer.forEach(userEvent -> {
        receivedUserEventList.add(userEvent);
        logger.info("Processing message with Global Sequence number: " + userEvent.getGlobalSequenceNumber() + ", User Event Id: " + userEvent.getUserEventId());
    });
    buffer.clear();
}
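The effect of processBuffer() can be seen without a broker: feed in sequence numbers in the interleaved order a multi-partition consumer might deliver them, and sorting the buffer restores the global order. This is a simplified sketch using plain longs instead of UserEvent objects:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class BufferReorderSketch {

    // Buffer the sequence numbers as they arrived across partitions,
    // then sort to restore the global send order.
    public static List<Long> reorder(List<Long> arrivedOrder) {
        List<Long> buffer = new ArrayList<>(arrivedOrder);
        Collections.sort(buffer);
        return buffer;
    }

    public static void main(String[] args) {
        // interleaved arrival, similar to the multi-partition output
        List<Long> arrived = List.of(1L, 2L, 3L, 8L, 9L, 5L, 4L, 10L, 7L, 6L);
        System.out.println(reorder(arrived)); // [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
    }
}
```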

Each event ID appears in the output alongside its corresponding partition, as shown below:

d6ef910f-2e65-410d-8b86-fa0fc69f2333, 0
4d6bfe60-7aad-4d1b-a536-cc735f649e1a, 4
9b68dcfe-a6c8-4cca-874d-cfdda6a93a8f, 4
84bd88f5-9609-4342-a7e5-d124679fa55a, 3
55c00440-84e0-4234-b8df-d474536e9357, 2
8fee6cac-7b8f-4da0-a317-ad38cc531a68, 1
d04c1268-25c1-41c8-9690-fec56397225d, 0
11ba8121-5809-4abf-9d9c-aa180330ac27, 4
8e00173c-b8e1-4cf7-ae8c-8a9e28cfa6b2, 4
e1acd392-db07-4325-8966-0f7c7a48e3d3, 3

Consumer output with global sequence numbers and event IDs:

1, d6ef910f-2e65-410d-8b86-fa0fc69f2333
2, 4d6bfe60-7aad-4d1b-a536-cc735f649e1a
3, 9b68dcfe-a6c8-4cca-874d-cfdda6a93a8f
4, 84bd88f5-9609-4342-a7e5-d124679fa55a
5, 55c00440-84e0-4234-b8df-d474536e9357
6, 8fee6cac-7b8f-4da0-a317-ad38cc531a68
7, d04c1268-25c1-41c8-9690-fec56397225d
8, 11ba8121-5809-4abf-9d9c-aa180330ac27
9, 8e00173c-b8e1-4cf7-ae8c-8a9e28cfa6b2
10, e1acd392-db07-4325-8966-0f7c7a48e3d3

3.3. Considerations for External Sequencing with Buffering

In this approach, each consumer instance buffers messages and processes them in order based on their sequence numbers. However, there are a few considerations:

  • Buffer Size: The buffer can grow depending on the volume of incoming messages. In implementations that prioritize strict ordering by sequence numbers, we might see significant buffer growth, especially if there are delays in message delivery. For instance, if we process 100 messages per minute but suddenly receive 200 due to a delay, the buffer will grow unexpectedly. So we must manage the buffer size effectively and have strategies ready in case it exceeds the anticipated limit.
  • Latency: When we buffer messages, we’re making them wait before processing, which introduces latency. On one hand, this keeps things orderly; on the other, it slows down the whole process. It’s all about finding the right balance between maintaining order and minimizing latency.
  • Failures: If consumers fail, we might lose the buffered messages. To prevent this, we might need to regularly save the state of our buffer.
  • Late Messages: Messages arriving after their window has been processed will be out of sequence. Depending on the use case, we might need strategies to handle or discard such messages.
  • State Management: If processing involves stateful operations, we’ll need mechanisms to manage and persist state across windows.
  • Resource Utilization: Keeping a lot of messages in the buffer requires memory. We need to ensure that we have enough resources for this, especially if messages stay in the buffer for longer periods.
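For the late-message case, one simple strategy is a watermark: after a sorted batch is flushed, remember the highest sequence number processed and treat anything at or below it as late. The sketch below (names are illustrative) assumes batches are sorted before flushing, as processBuffer() does:

```java
public class LateMessageFilter {

    private long highestProcessedSequence = 0;

    // Returns true if the event is still in sequence; false if its window
    // was already flushed, in which case we log, dead-letter, or discard it.
    public boolean accept(long sequenceNumber) {
        if (sequenceNumber <= highestProcessedSequence) {
            return false;
        }
        highestProcessedSequence = sequenceNumber;
        return true;
    }
}
```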

3.4. Idempotent Producers

Kafka’s idempotent producer feature aims to deliver each message exactly once, preventing duplicates. This is crucial in scenarios where a producer might retry sending a message due to network errors or other transient failures. While the primary goal of idempotency is to prevent message duplication, it indirectly influences message ordering. Kafka achieves idempotency using two things: a Producer ID (PID) and a sequence number, which together act as the idempotency key and are unique within the context of a specific partition.

  • Sequence Numbers: Kafka assigns sequence numbers to each message sent by the producer. These sequence numbers are unique per partition, ensuring that messages, when sent by the producer in a specific sequence, are written in that same order within a specific partition upon being received by Kafka. Sequence numbers guarantee order within a single partition. However, when producing messages to multiple partitions, there’s no global order guarantee across partitions. For example, if a producer sends messages M1, M2, and M3 to partitions P1, P2, and P3, respectively, each message receives a unique sequence number within its partition, but this doesn’t guarantee the relative consumption order across those partitions.
  • Producer ID (PID): When enabling idempotency, the broker assigns a unique Producer ID (PID) to each producer. This PID, combined with the sequence number, enables Kafka to identify and discard any duplicate messages that result from producer retries.

Kafka guarantees message ordering by writing messages to partitions in the order they’re produced, thanks to sequence numbers, and prevents duplicates using the PID and the idempotency feature. To enable the idempotent producer, we need to set the “enable.idempotence” property to true in the producer’s configuration:

props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, "true");
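Idempotence works together with the related reliability settings. A typical combination (values illustrative; the constants come from the kafka-clients ProducerConfig class) looks like this:

```java
Properties props = new Properties();
props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, "true");
// idempotence requires acks=all, so the write is acknowledged by all in-sync replicas
props.put(ProducerConfig.ACKS_CONFIG, "all");
// retries are now safe: resends can no longer create duplicates
props.put(ProducerConfig.RETRIES_CONFIG, String.valueOf(Integer.MAX_VALUE));
// with idempotence enabled, up to 5 in-flight requests still preserve per-partition order
props.put(ProducerConfig.MAX_IN_FLIGHT_REQUESTS_PER_CONNECTION, "5");
```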

4. Key Configurations for Producer and Consumer

There are key configurations for Kafka producers and consumers that can influence message ordering and throughput.

4.1. Producer Configurations

  • MAX_IN_FLIGHT_REQUESTS_PER_CONNECTION: When we send a batch of messages, this setting decides how many requests we can have in flight without waiting for an acknowledgment. If we set it higher than 1 without turning on idempotence, a retry can disturb the order of our messages. But if we turn on idempotence, Kafka keeps messages in order even if we send several at once. For the strictest ordering, where every message is acknowledged before the next one is sent, we should set this value to 1. If we want to prioritize speed over perfect ordering, we can set it up to 5, but without idempotence this can introduce ordering issues.
  • BATCH_SIZE_CONFIG and LINGER_MS_CONFIG: BATCH_SIZE_CONFIG controls the default batch size in bytes, grouping records for the same partition into fewer requests for better performance. If we set this limit too low, we’ll send lots of small batches, which can slow us down. But if we set it too high, we might not use our memory efficiently. Kafka can wait a bit before sending a batch if it isn’t full yet; this wait time is controlled by LINGER_MS_CONFIG. If enough messages come in quickly to fill the batch, they go immediately, but if not, Kafka sends whatever it has once the linger time is up. It’s a balance between speed and efficiency, sending just enough messages at a time without unnecessary delays.
props.put(ProducerConfig.MAX_IN_FLIGHT_REQUESTS_PER_CONNECTION, "1");
props.put(ProducerConfig.BATCH_SIZE_CONFIG, "16384");
props.put(ProducerConfig.LINGER_MS_CONFIG, "5");

4.2. Consumer Configurations

  • MAX_POLL_RECORDS_CONFIG: This is the limit on how many records our Kafka consumer grabs each time it polls. Setting this number high lets us consume a lot of data at once, boosting throughput. But there’s a catch: the more we take, the harder it may be to keep everything in order. So we need to find the sweet spot where we’re efficient but not overwhelmed.
  • FETCH_MIN_BYTES_CONFIG: If we set this number high, Kafka waits until it has enough data to meet our minimum bytes before responding. This can mean fewer fetches, which is good for efficiency. But if we want our data fast, we might set this number lower so Kafka sends whatever it has more quickly. For instance, if our consumer application is resource-intensive or needs to maintain strict message order, especially with multi-threading, a smaller batch might be beneficial.
  • FETCH_MAX_WAIT_MS_CONFIG: This decides how long our consumer waits for Kafka to gather enough data to meet FETCH_MIN_BYTES_CONFIG. If we set this time high, our consumer is willing to wait longer, potentially getting more data in one go. If we’re in a rush, we set it lower so the consumer gets data faster, even if there’s less of it. It’s a balancing act between waiting for a bigger haul and getting things moving quickly.
props.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, "500");
props.put(ConsumerConfig.FETCH_MIN_BYTES_CONFIG, "1");
props.put(ConsumerConfig.FETCH_MAX_WAIT_MS_CONFIG, "500");

5. Conclusion

In this article, we delved into the intricacies of message ordering in Kafka. We explored the challenges and presented strategies to address them. Whether through single partitions, external sequencing with time-window buffering, or idempotent producers, Kafka offers approaches tailored to different message-ordering needs.

The code backing this article is available on GitHub. Once you're logged in as a Baeldung Pro Member, start learning and coding on the project.