1. Overview
In this tutorial, we’ll discuss handling Kafka messages in batches with the Spring Kafka library’s @KafkaListener annotation. The Kafka broker is middleware that persists messages from source systems. The target systems are configured to poll the Kafka topics periodically and read the messages from them.
This prevents message loss when the target systems or services are down. When the target services recover, they resume consuming the unprocessed messages. Therefore, this type of architecture increases message durability and, in turn, the system’s fault tolerance.
2. Why Handle Messages in Batches?
It’s common for multiple sources or event producers to send messages simultaneously to the same Kafka topic. As a result, huge volumes of messages can accumulate on it. If target services or consumers receive these huge volumes in a single poll, they may fail to process them efficiently.
This can have a cascading effect that leads to bottlenecks. Eventually, it affects all the downstream processes that depend on the messages. Therefore, consumers or message listeners should limit the number of messages they handle at any point in time.
To run in batch mode, we must configure the right batch size by considering the volume of data published on the topics and the application’s capacity. Moreover, the consumer applications should be designed to handle messages in bulk to meet the SLAs.
Additionally, without batch processing, consumers have to poll the Kafka topics repeatedly to fetch messages one at a time. This approach puts pressure on the compute resources. Batch processing is therefore much more efficient than processing a single message per poll.
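To give a concrete sense of the knobs involved, the Kafka consumer exposes a few properties that directly control how much data a single poll returns. Here’s a minimal sketch; the values are illustrative assumptions, not settings taken from this article’s setup:
// Batching-related consumer properties (values here are illustrative assumptions)
Map<String, Object> props = new HashMap<>();
props.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, 500);  // upper bound on records returned by a single poll()
props.put(ConsumerConfig.FETCH_MIN_BYTES_CONFIG, 1024);  // the broker waits until this much data is available...
props.put(ConsumerConfig.FETCH_MAX_WAIT_MS_CONFIG, 500); // ...or until this many milliseconds have passed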
However, batch processing may not be suitable in certain cases where:
- The volume of messages is small
- Immediate processing is critical in time-sensitive applications
- There’s a constraint on the compute and memory resources
- Strict message ordering is critical
3. Batch Processing by Using @KafkaListener Annotation
To understand batch processing, we’ll start by defining a use case. Then we’ll implement it, first using basic message processing and then batch processing. This way, we can better appreciate the value of processing messages in batches.
3.1. Use Case Description
Let’s assume that many critical IT infrastructure devices, such as servers and network devices, run in a company’s data center. Multiple monitoring tools keep track of these devices’ KPIs (Key Performance Indicators). Since the operations team wants to do proactive monitoring, they expect real-time actionable analytics. Hence, strict SLAs exist to transmit KPIs to the target analytics application.
The operations team configures the monitoring tools to send the KPIs to a Kafka topic at regular intervals. A consumer application reads the messages from the topic and pushes them to a data lake. Another application then reads the data from the data lake and generates real-time analytics.
Let’s implement a consumer with and without batch processing configured. We’ll analyze the differences and outcomes of both implementations.
3.2. Prerequisites
Before starting to implement batch processing, it’s crucial to understand the Spring Kafka library. Luckily, we’ve discussed this subject in the article, Intro to Apache Kafka with Spring, which provides us with the much-needed momentum.
For learning purposes, we’ll need a Kafka instance. Therefore, to get started quickly, we’ll use embedded Kafka.
Lastly, we’ll need a program to create a topic in the Kafka broker and publish sample messages to it at regular intervals. Essentially, we’ll use JUnit 5 tests to demonstrate the concept.
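As a rough sketch of that setup (the class name, test body, and property wiring below are illustrative assumptions rather than the article’s actual test class), an embedded broker and a KafkaTemplate could be wired together like this:
// Hypothetical test skeleton: an embedded broker plus an auto-configured KafkaTemplate for publishing samples
@SpringBootTest
@EmbeddedKafka(
  partitions = 1,
  topics = { "kpi_topic" },
  bootstrapServersProperty = "spring.kafka.bootstrap-servers")
class KpiConsumerLiveTest {

    @Autowired
    private KafkaTemplate<String, String> kafkaTemplate;

    @Test
    void whenKpiPublished_thenBrokerAcceptsIt() {
        // publish one sample KPI message to the embedded broker
        kafkaTemplate.send("kpi_topic", "Test KPI Message-1");
    }
}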
3.3. Basic Listener
Let’s begin with a basic listener that reads messages one by one from the Kafka broker. We’ll define the ConcurrentKafkaListenerContainerFactory bean in the KafkaKpiConsumerWithNoBatchConfig configuration class:
@Configuration
public class KafkaKpiConsumerWithNoBatchConfig {

    @Bean(name = "kafkaKpiListenerContainerFactory")
    public ConcurrentKafkaListenerContainerFactory<String, String> kafkaKpiBasicListenerContainerFactory(
      ConsumerFactory<String, String> consumerFactory) {
        ConcurrentKafkaListenerContainerFactory<String, String> factory = new ConcurrentKafkaListenerContainerFactory<>();
        factory.setConsumerFactory(consumerFactory);
        // a single consumer thread reading one record at a time
        factory.setConcurrency(1);
        factory.getContainerProperties().setPollTimeout(3000);
        return factory;
    }
}
The kafkaKpiBasicListenerContainerFactory() method returns the kafkaKpiListenerContainerFactory bean. This bean configures a basic listener that processes one message at a time:
@Component
public class KpiConsumer {

    private CountDownLatch latch = new CountDownLatch(1);
    private ConsumerRecord<String, String> message;

    @Autowired
    private DataLakeService dataLakeService;

    @KafkaListener(
      id = "kpi-listener",
      topics = "kpi_topic",
      containerFactory = "kafkaKpiListenerContainerFactory")
    public void listen(ConsumerRecord<String, String> record) throws InterruptedException {
        this.message = record;
        latch.await();

        List<String> messages = new ArrayList<>();
        messages.add(record.value());
        dataLakeService.save(messages);
        // reset the latch
        latch = new CountDownLatch(1);
    }

    // Standard getter methods
}
We’ve applied the @KafkaListener annotation to the listen() method. The annotation sets the listener topic and the listener container factory bean. The java.util.concurrent.CountDownLatch object in the KpiConsumer class helps control message processing in the JUnit 5 tests: we’ll use it to pause the listener at a well-defined point.
The CountDownLatch#await() method pauses the listener thread and the thread resumes when the test method calls the CountDownLatch#countDown() method. Without this, understanding and tracking the messages would be difficult. Finally, the downstream DataLakeService#save() method receives a single message for processing.
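The DataLakeService itself isn’t central to the discussion and isn’t shown in the article; a minimal stand-in, assuming it only needs to persist the received values, might look like this:
// Hypothetical stand-in for the downstream service; the real persistence logic is out of scope
@Service
public class DataLakeService {

    public void save(List<String> messages) {
        // in a real system, this would write the KPI messages to the data lake
        messages.forEach(message -> System.out.println("Persisting KPI: " + message));
    }
}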
Let’s now take a look at the test method that helps track the messages handled by the KpiConsumer class:
@RepeatedTest(10)
void givenKafka_whenMessage1OnTopic_thenListenerConsumesMessages(RepetitionInfo repetitionInfo) {
    String testNo = String.valueOf(repetitionInfo.getCurrentRepetition());
    assertThat(kpiConsumer.getMessage().value()).isEqualTo("Test KPI Message-".concat(testNo));
    kpiConsumer.getLatch().countDown();
}
When monitoring tools publish KPI messages into the kpi_topic Kafka topic, the listener receives them in their order of arrival.
Each time the method executes, it tracks the message arriving in the KpiConsumer#listen() method. After confirming the message order, it releases the latch, and the listener finishes processing.
3.4. Listener Capable of Batch Processing
Now, let’s explore batch processing support in Kafka. We’ll first define the ConcurrentKafkaListenerContainerFactory bean in the Spring configuration class:
@Bean(name = "kafkaKpiListenerContainerFactory")
public ConcurrentKafkaListenerContainerFactory<String, String> kafkaKpiBatchListenerContainerFactory(
  ConsumerFactory<String, String> consumerFactory) {
    ConcurrentKafkaListenerContainerFactory<String, String> factory = new ConcurrentKafkaListenerContainerFactory<>();

    // cap the number of records returned by a single poll
    Map<String, Object> configProps = new HashMap<>();
    configProps.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, "20");
    consumerFactory.updateConfigs(configProps);

    factory.setConcurrency(1);
    factory.setConsumerFactory(consumerFactory);
    factory.getContainerProperties().setPollTimeout(3000);
    // deliver records to the listener as a batch instead of one at a time
    factory.setBatchListener(true);
    return factory;
}
The method is similar to the kafkaKpiBasicListenerContainerFactory() method defined in the previous section. We’ve enabled batch processing by calling the ConcurrentKafkaListenerContainerFactory#setBatchListener() method.
In addition, we’ve set the maximum number of messages per poll with the help of the ConsumerConfig.MAX_POLL_RECORDS_CONFIG property. The ConcurrentKafkaListenerContainerFactory#setConcurrency() method sets the number of concurrent consumer threads processing messages simultaneously. We can refer to the official Spring Kafka documentation for other configuration options.
Furthermore, configuration properties like ConsumerConfig.FETCH_MAX_BYTES_CONFIG and ConsumerConfig.FETCH_MIN_BYTES_CONFIG can help limit the amount of data fetched per poll as well.
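For example, these limits could be added to the same consumer configuration; a short sketch, with byte values chosen purely for illustration:
// Illustrative fetch-size limits (the values are assumptions, not recommendations)
Map<String, Object> fetchProps = new HashMap<>();
fetchProps.put(ConsumerConfig.FETCH_MIN_BYTES_CONFIG, 1024);            // wait for at least 1 KB of data per fetch
fetchProps.put(ConsumerConfig.FETCH_MAX_BYTES_CONFIG, 5 * 1024 * 1024); // cap a single fetch at 5 MB
consumerFactory.updateConfigs(fetchProps);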
Now, let’s look at the consumer:
@Component
public class KpiBatchConsumer {

    private CountDownLatch latch = new CountDownLatch(1);

    @Autowired
    private DataLakeService dataLakeService;

    private List<String> receivedMessages = new ArrayList<>();

    @KafkaListener(
      id = "kpi-batch-listener",
      topics = "kpi_batch_topic",
      batch = "true",
      containerFactory = "kafkaKpiListenerContainerFactory")
    public void listen(ConsumerRecords<String, String> records) throws InterruptedException {
        records.forEach(record -> receivedMessages.add(record.value()));
        latch.await();

        dataLakeService.save(receivedMessages);
        // reset the latch
        latch = new CountDownLatch(1);
    }

    // Standard getter methods
}
KpiBatchConsumer is similar to the KpiConsumer class defined earlier, except that the @KafkaListener annotation has an extra batch attribute. The listen() method takes an argument of type ConsumerRecords instead of ConsumerRecord. We can iterate through the ConsumerRecords object to fetch all the ConsumerRecord elements in the batch.
Listeners can also process messages received in a batch in the order they arrive. However, Kafka only guarantees ordering within a partition, so maintaining order across the partitions of a topic is complex.
Here, ConsumerRecord represents a message published to the Kafka topic. Eventually, we call the DataLakeService#save() method with the whole batch of messages. Lastly, the CountDownLatch object plays the same role as before.
Let’s assume a hundred KPI messages are pushed into the kpi_batch_topic Kafka topic. We can now check the listener in action:
@RepeatedTest(5)
void givenKafka_whenMessagesOnTopic_thenListenerConsumesMessages() {
    int messageSize = kpiBatchConsumer.getReceivedMessages().size();
    assertThat(messageSize % 20).isEqualTo(0);
    kpiBatchConsumer.getLatch().countDown();
}
Unlike the basic listener, where messages are picked up one by one, here the KpiBatchConsumer#listen() method receives a batch of 20 KPI messages per invocation.
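The hundred messages assumed above could be produced with a simple loop over a KafkaTemplate; a sketch, assuming the template is wired against the same embedded broker as before:
// Hypothetical setup step: publish 100 sample KPI messages before the assertions run
for (int i = 1; i <= 100; i++) {
    kafkaTemplate.send("kpi_batch_topic", "Test KPI Message-" + i);
}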
4. Conclusion
In this article, we discussed the difference between a basic Kafka listener and a listener enabled for batch processing. Batch processing handles multiple messages simultaneously, which improves application performance. However, appropriate limits on batch size and message size are important for keeping that performance under control, so they should be tuned through careful and rigorous benchmarking.
The source code used in this article is available over on GitHub.