subscribe() vs assign() Methods of KafkaConsumer
Last updated: February 14, 2026
1. Overview
Apache Kafka provides client libraries that let developers produce and consume messages; the Java client exposes a low-level API (Application Programming Interface), and clients exist for other programming languages as well. The KafkaConsumer class in the Java client offers two methods for choosing what to read: subscribe() and assign().
In this tutorial, we’ll discuss the difference between the subscribe() and assign() methods in the Kafka Java Client API. We’ll see that the main difference between them is automatic vs. manual partition assignment. The version of Kafka we’ll be using in the examples is 4.1.1.
2. Automatic Partition Assignment Using subscribe()
We’ll discuss the subscribe() method of the KafkaConsumer class in this section.
2.1. KafkaConsumer.subscribe()
The subscribe() method of the KafkaConsumer class is used to subscribe to one or more topics. If a consumer is part of a consumer group, the Kafka cluster automatically allocates partition assignments to the group’s consumers. This provides dynamic scaling and load balancing when new consumers join or existing consumers leave, and it makes partition management simpler.
Here is the definition of the subscribe() method:
public void subscribe(Collection<String> topics)
It subscribes to the list of topics passed to it. There are other overloads of this method, but the main idea remains the same: the consumer obtains dynamically assigned partitions.
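For instance, one overload also accepts a ConsumerRebalanceListener, which lets us react whenever the cluster assigns partitions to the consumer or revokes them. Here’s a minimal sketch, assuming a consumer and a logger like the ones used below:
consumer.subscribe(List.of("test-topic"), new ConsumerRebalanceListener() {
    @Override
    public void onPartitionsRevoked(Collection<TopicPartition> partitions) {
        // Called before a rebalance takes partitions away from this consumer
        logger.info("Revoked: " + partitions);
    }

    @Override
    public void onPartitionsAssigned(Collection<TopicPartition> partitions) {
        // Called after the cluster assigns partitions to this consumer
        logger.info("Assigned: " + partitions);
    }
});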
2.2. An Example
Let’s see an example that uses the subscribe() method. For simplicity, we’ll use a single consumer, i.e., a consumer that is the only member of its consumer group. Here is the code snippet in Java:
// Create Kafka Consumer
KafkaConsumer<String, String> consumer = new KafkaConsumer<>(properties);
// Subscribe the consumer to our topic
String topic = "test-topic";
consumer.subscribe(List.of(topic));
Firstly, we create a Kafka consumer and then subscribe to a single topic named test-topic.
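The creation of the properties object isn’t shown above; a minimal configuration, assuming a local broker on localhost:9092 and a hypothetical group id test-group (a group.id is required when using subscribe()), could look like this:
Properties properties = new Properties();
properties.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
properties.put(ConsumerConfig.GROUP_ID_CONFIG, "test-group");
properties.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
properties.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
// Read from the beginning of each partition if no committed offset exists
properties.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");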
Then, we fetch the incoming records of the topic in an infinite while loop:
logger.info("Waiting for messages...");
// Poll the data
while (true) {
    ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(1000));
    for (ConsumerRecord<String, String> record : records) {
        logger.info("Value: " + record.value() + " -- Partition: " + record.partition());
    }
}
We use the poll() method of the KafkaConsumer class to fetch the received records. It returns the records immediately if any are available. Otherwise, it waits for the timeout period, 1000 milliseconds in our example. If the timeout expires, poll() returns an empty set of records. We print the value and partition of each received record by iterating over the records.
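In a real application, we’d typically stop such an infinite loop by calling wakeup() from another thread, which is the only thread-safe method of KafkaConsumer and makes the blocked poll() call throw a WakeupException. Here’s a sketch of that pattern, using a shutdown hook as an assumed trigger:
// Trigger wakeup() when the JVM shuts down so poll() stops blocking
Runtime.getRuntime().addShutdownHook(new Thread(consumer::wakeup));

try {
    while (true) {
        ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(1000));
        for (ConsumerRecord<String, String> record : records) {
            logger.info("Value: " + record.value() + " -- Partition: " + record.partition());
        }
    }
} catch (WakeupException e) {
    // Expected on shutdown: wakeup() interrupts the blocked poll() call
} finally {
    consumer.close();
}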
2.3. Testing the Example
Now, let’s test the consumer’s behavior. Firstly, we need to create the topic test-topic using the kafka-topics.sh script:
$ kafka-topics.sh --bootstrap-server localhost:9092 --topic test-topic --create --partitions 3
Created topic test-topic.
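Alternatively, we could create the same topic programmatically with Kafka’s AdminClient API; here’s a minimal sketch, assuming a local broker, a replication factor of 1, and an enclosing method that handles the checked exceptions thrown by get():
Properties adminProps = new Properties();
adminProps.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");

try (AdminClient admin = AdminClient.create(adminProps)) {
    // Topic name, number of partitions, replication factor
    NewTopic newTopic = new NewTopic("test-topic", 3, (short) 1);
    admin.createTopics(List.of(newTopic)).all().get();
}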
We explicitly set the number of partitions to 3 using the --partitions option; otherwise, the broker’s num.partitions setting would apply. Now, let’s start a producer using the kafka-console-producer.sh script:
$ kafka-console-producer.sh --bootstrap-server localhost:9092 --topic test-topic --producer-property partitioner.class=org.apache.kafka.clients.producer.RoundRobinPartitioner
>
The arrowhead symbol, >, shows that we’re ready to send messages to test-topic. Using the --producer-property option, we set the RoundRobinPartitioner strategy so that the producer distributes messages across the partitions in a round-robin fashion. Otherwise, since the console producer sends messages with a null key, the default partitioner may write consecutive messages to the same partition.
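For reference, a Java producer can be configured with the same partitioner.class property; here’s a minimal sketch, assuming a local broker, that isn’t used in the rest of the tutorial:
Properties producerProps = new Properties();
producerProps.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
producerProps.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
producerProps.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
producerProps.put(ProducerConfig.PARTITIONER_CLASS_CONFIG, RoundRobinPartitioner.class.getName());

try (KafkaProducer<String, String> producer = new KafkaProducer<>(producerProps)) {
    // No key is set, so the configured partitioner picks the target partition
    producer.send(new ProducerRecord<>("test-topic", "Message1"));
}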
Now, let’s send six messages:
$ kafka-console-producer.sh --bootstrap-server localhost:9092 --topic test-topic --producer-property partitioner.class=org.apache.kafka.clients.producer.RoundRobinPartitioner
>Message1
>Message2
>Message3
>Message4
>Message5
>Message6
>
Afterwards, let’s check the output of the consumer application:
Waiting for messages...
Value: Message1 -- Partition: 0
Value: Message2 -- Partition: 1
Value: Message3 -- Partition: 2
Value: Message4 -- Partition: 0
Value: Message5 -- Partition: 1
Value: Message6 -- Partition: 2
As evident from the output, our consumer received the messages from all three partitions: since it’s the only consumer in its group, it’s assigned every partition of test-topic. There are three partitions, as expected. Message1 and Message4 are located in the first partition, Partition 0. Similarly, Message2 and Message5 are in the second partition, Partition 1, whereas Message3 and Message6 are in the last partition, Partition 2.
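We can also confirm which partitions the cluster assigned to our consumer by calling assignment() after the first poll(), since the assignment only happens once the consumer has joined the group; a quick sketch:
consumer.poll(Duration.ofMillis(1000));
// With a single consumer in the group, we expect all three partitions of test-topic here
Set<TopicPartition> assigned = consumer.assignment();
logger.info("Assigned partitions: " + assigned);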
3. Manual Partition Assignment Using assign()
We’ll discuss the assign() method of the KafkaConsumer class in this section.
3.1. KafkaConsumer.assign()
We use the assign() method of the KafkaConsumer class to manually assign partitions to a consumer. Therefore, it provides full control over partitions. Since there isn’t any automatic rebalancing when new consumers join or existing consumers leave, consumption may be more stable because the consumer always reads from the same partitions. However, there’s no automatic scaling or fault tolerance.
Here is the definition of the assign() method:
public void assign(Collection<TopicPartition> partitions)
It takes a collection of TopicPartition objects as input and assigns those partitions to the consumer.
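If we want manual control but still need every partition of a topic, we can first look the partitions up with partitionsFor() and assign them all; a minimal sketch:
List<TopicPartition> allPartitions = consumer.partitionsFor("test-topic")
  .stream()
  .map(info -> new TopicPartition(info.topic(), info.partition()))
  .collect(Collectors.toList());

consumer.assign(allPartitions);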
3.2. An Example
Let’s see an example that uses the assign() method. Here is the code snippet in Java:
// Create Kafka Consumer
KafkaConsumer<String, String> consumer = new KafkaConsumer<>(properties);
// Assign a partition of our topic to the consumer
String topic = "test-topic";
consumer.assign(List.of(new TopicPartition(topic, 1)));
After creating a Kafka consumer, we assign the second partition of test-topic to this consumer. As we’ve already seen in the example of the previous section, partition numbering starts from 0. Therefore, the second argument of the TopicPartition constructor, i.e., 1, corresponds to the second partition.
We use the same while loop as in the previous section to fetch the received records.
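Manual assignment also pairs naturally with explicit offset control; for instance, here’s a sketch that ignores any previously committed offset and re-reads the assigned partition from its first record:
TopicPartition partition = new TopicPartition("test-topic", 1);
consumer.assign(List.of(partition));

// Start from the earliest available record in the assigned partition
consumer.seekToBeginning(List.of(partition));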
3.3. Testing the Example
To test the application, let’s start a producer using the kafka-console-producer.sh script and write six messages:
$ kafka-console-producer.sh --bootstrap-server localhost:9092 --topic test-topic --producer-property partitioner.class=org.apache.kafka.clients.producer.RoundRobinPartitioner
>Message11
>Message12
>Message13
>Message14
>Message15
>Message16
>
Having written the messages, let’s check the output of the consumer application:
Waiting for messages...
Value: Message12 -- Partition: 1
Value: Message15 -- Partition: 1
Now, instead of receiving all the messages, we read only the messages in Partition 1, Message12 and Message15, as expected.
4. Conclusion
In this article, we discussed the difference between the subscribe() and assign() methods in the Kafka Java Client API. Firstly, we saw the subscribe() method, which gets a list of topic names. The consumer subscribes to the topics in the list.
Then, we discussed the assign() method, which gets a list of partitions. The consumer reads only from the given partitions, each specified by a topic name and a partition number.