
1. Overview
Apache Kafka has established itself as one of the most widely used messaging systems for building event-driven architectures, where one microservice publishes a message to a topic and another microservice consumes and processes it asynchronously.
However, there are scenarios where the publisher microservice needs a response immediately to proceed with further processing. While Kafka is inherently designed for asynchronous communication, we can configure it to support synchronous request-reply communication through a pair of request and reply topics.
In this tutorial, we’ll explore how to implement synchronous request-reply communication in a Spring Boot application using Apache Kafka.
2. Setting up the Project
For our demonstration, we’ll simulate a notification dispatch system. We’ll create a single Spring Boot application that will act as both the producer and the consumer.
2.1. Dependencies
Let’s start by adding the Spring Kafka dependency to our project’s pom.xml file:
<dependency>
    <groupId>org.springframework.kafka</groupId>
    <artifactId>spring-kafka</artifactId>
    <version>3.3.4</version>
</dependency>
This dependency provides us with the necessary classes to establish a connection and interact with the provisioned Kafka instance.
2.2. Defining Request-Reply Messages
Next, let’s define two records to represent our request and reply messages:
record NotificationDispatchRequest(String emailId, String content) {
}

record NotificationDispatchResponse(UUID notificationId) {
}
Here, the NotificationDispatchRequest record holds the emailId and content of the notification, while the NotificationDispatchResponse record contains a unique notificationId that is generated after processing the request.
2.3. Defining Kafka Topics and Configuration Properties
Now, let’s define our request and reply Kafka topics. Additionally, we’ll configure a timeout duration for receiving a reply from the consumer component.
We’ll store these properties in our project’s application.yaml file and use @ConfigurationProperties to map the values to a Java record, which our configuration and service layers can reference:
@Validated
@ConfigurationProperties(prefix = "com.baeldung.kafka.synchronous")
record SynchronousKafkaProperties(
    @NotBlank
    String requestTopic,

    @NotBlank
    String replyTopic,

    @NotNull @DurationMin(seconds = 10) @DurationMax(minutes = 2)
    Duration replyTimeout
) {
}
We’ve also added validation annotations to ensure all the required properties are configured correctly. Here, @NotBlank and @NotNull come from Jakarta Bean Validation, while @DurationMin and @DurationMax come from Hibernate Validator; both are available through the spring-boot-starter-validation dependency. If any of the defined validations fail, the Spring ApplicationContext will fail to start up, allowing us to conform to the fail-fast principle.
Below is a snippet of our application.yaml file, which defines the required properties that will be mapped to our SynchronousKafkaProperties record automatically:
com:
  baeldung:
    kafka:
      synchronous:
        request-topic: notification-dispatch-request
        reply-topic: notification-dispatch-response
        reply-timeout: 30s
Here, we configure our request and reply Kafka topic names along with a reply-timeout of thirty seconds.
In addition to our custom properties, let’s add a few core Kafka configuration properties to our application.yaml file as well:
spring:
  kafka:
    bootstrap-servers: ${KAFKA_BOOTSTRAP_SERVERS}
    producer:
      key-serializer: org.apache.kafka.common.serialization.StringSerializer
      value-serializer: org.springframework.kafka.support.serializer.JsonSerializer
    consumer:
      group-id: synchronous-kafka-group
      key-deserializer: org.apache.kafka.common.serialization.StringDeserializer
      value-deserializer: org.springframework.kafka.support.serializer.JsonDeserializer
      properties:
        spring:
          json:
            trusted:
              packages: com.baeldung.kafka.synchronous
    properties:
      allow:
        auto:
          create:
            topics: true
First, to allow our application to connect to the provisioned Kafka instance, we configure its bootstrap server URL using an environment variable.
Next, we configure the key and value serialization and deserialization properties for both the consumer and producer. Additionally, for our consumer, we configure a group-id and trust the package containing our request-reply records for JSON deserialization.
With the above properties configured, Spring Boot’s Kafka auto-configuration creates ConsumerFactory and ProducerFactory beans for us. We’ll use them to define additional Kafka configuration beans in the next section.
Lastly, we enable auto-creation of topics, so Kafka automatically creates them if they don’t exist. It’s important to note that we’ve only enabled this property for our demonstration — the same should not be done in production applications.
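Speaking of production setups, instead of relying on auto-creation, applications typically declare their topics explicitly. One way to do that with Spring Kafka is to expose NewTopic beans from a configuration class; the following sketch uses our two topic names with purely illustrative partition and replication settings:
@Bean
NewTopic notificationDispatchRequestTopic() {
    return TopicBuilder.name("notification-dispatch-request")
      .partitions(1)
      .replicas(1)
      .build();
}

@Bean
NewTopic notificationDispatchResponseTopic() {
    return TopicBuilder.name("notification-dispatch-response")
      .partitions(1)
      .replicas(1)
      .build();
}
Spring Boot’s auto-configured KafkaAdmin detects these beans at startup and creates the topics if they don’t already exist.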
2.4. Defining Kafka Configuration Beans
With our configuration properties in place, let’s define the necessary Kafka configuration beans:
@Bean
KafkaMessageListenerContainer<String, NotificationDispatchResponse> kafkaMessageListenerContainer(
  ConsumerFactory<String, NotificationDispatchResponse> consumerFactory) {
    String replyTopic = synchronousKafkaProperties.replyTopic();
    ContainerProperties containerProperties = new ContainerProperties(replyTopic);
    return new KafkaMessageListenerContainer<>(consumerFactory, containerProperties);
}
First, we inject the ConsumerFactory instance and use it along with the configured replyTopic to create a KafkaMessageListenerContainer bean. This container polls our reply topic for response messages.
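Note that the synchronousKafkaProperties reference in the above snippet is the configuration properties record we defined earlier, injected into the configuration class that hosts these bean methods. Here’s a minimal sketch of what we assume that class looks like; the class name and constructor injection are our own choices:
@Configuration
@EnableConfigurationProperties(SynchronousKafkaProperties.class)
class KafkaConfiguration {

    private final SynchronousKafkaProperties synchronousKafkaProperties;

    KafkaConfiguration(SynchronousKafkaProperties synchronousKafkaProperties) {
        this.synchronousKafkaProperties = synchronousKafkaProperties;
    }

    // @Bean methods from this section go here
}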
Next, we’ll define the core bean that we’ll use in our service layer to perform synchronous communication:
@Bean
ReplyingKafkaTemplate<String, NotificationDispatchRequest, NotificationDispatchResponse> replyingKafkaTemplate(
  ProducerFactory<String, NotificationDispatchRequest> producerFactory,
  KafkaMessageListenerContainer<String, NotificationDispatchResponse> kafkaMessageListenerContainer) {
    Duration replyTimeout = synchronousKafkaProperties.replyTimeout();
    var replyingKafkaTemplate = new ReplyingKafkaTemplate<>(producerFactory, kafkaMessageListenerContainer);
    replyingKafkaTemplate.setDefaultReplyTimeout(replyTimeout);
    return replyingKafkaTemplate;
}
Using the ProducerFactory and the KafkaMessageListenerContainer bean we defined earlier, we create a ReplyingKafkaTemplate bean. Additionally, using the autowired synchronousKafkaProperties, we set the reply-timeout we defined in our application.yaml file, which determines how long our service waits for a response before timing out.
This ReplyingKafkaTemplate bean manages the interactions between the request and reply topics, making synchronous communication over Kafka possible.
Lastly, let’s define beans to enable our listener component to send responses back to the reply topic:
@Bean
KafkaTemplate<String, NotificationDispatchResponse> kafkaTemplate(
  ProducerFactory<String, NotificationDispatchResponse> producerFactory) {
    return new KafkaTemplate<>(producerFactory);
}

@Bean
KafkaListenerContainerFactory<ConcurrentMessageListenerContainer<String, NotificationDispatchRequest>> kafkaListenerContainerFactory(
  ConsumerFactory<String, NotificationDispatchRequest> consumerFactory,
  KafkaTemplate<String, NotificationDispatchResponse> kafkaTemplate) {
    var factory = new ConcurrentKafkaListenerContainerFactory<String, NotificationDispatchRequest>();
    factory.setConsumerFactory(consumerFactory);
    factory.setReplyTemplate(kafkaTemplate);
    return factory;
}
First, we create a standard KafkaTemplate bean using the ProducerFactory bean.
Then, we use it along with the ConsumerFactory bean to define the KafkaListenerContainerFactory bean. This bean enables the listener components that consume messages from the request topic to send a message back to the reply topic once the required processing is completed. Since we name this bean kafkaListenerContainerFactory, the @KafkaListener annotation we’ll use in the next section picks it up by default.
3. Implementing Synchronous Communication With Kafka
With our configuration in place, let’s implement a synchronous request-reply communication between our two configured Kafka topics.
3.1. Sending and Receiving Messages Using ReplyingKafkaTemplate
First, let’s create a NotificationDispatchService class that sends messages to the configured request topic using the ReplyingKafkaTemplate bean we defined earlier:
@Service
@EnableConfigurationProperties(SynchronousKafkaProperties.class)
class NotificationDispatchService {

    private final SynchronousKafkaProperties synchronousKafkaProperties;
    private final ReplyingKafkaTemplate<String, NotificationDispatchRequest, NotificationDispatchResponse> replyingKafkaTemplate;

    // standard constructor

    NotificationDispatchResponse dispatch(NotificationDispatchRequest notificationDispatchRequest)
      throws ExecutionException, InterruptedException {
        String requestTopic = synchronousKafkaProperties.requestTopic();
        ProducerRecord<String, NotificationDispatchRequest> producerRecord = new ProducerRecord<>(requestTopic, notificationDispatchRequest);
        var requestReplyFuture = replyingKafkaTemplate.sendAndReceive(producerRecord);
        return requestReplyFuture.get().value();
    }
}
Here, in our dispatch() method, we use the autowired synchronousKafkaProperties instance to extract the requestTopic configured in our application.yaml file. Then, we use it along with the notificationDispatchRequest passed in the method’s argument to create a ProducerRecord instance.
Next, we pass the created ProducerRecord instance to the sendAndReceive() method to publish the message to the request topic. The method returns a RequestReplyFuture object; calling get() on it blocks until the reply arrives, after which we return its value. For brevity, we let the checked exceptions declared by get() propagate from our dispatch() method.
Under the hood, when we call the sendAndReceive() method, the ReplyingKafkaTemplate class generates a unique correlation ID, which is a random UUID, and attaches it to the outgoing message’s header. Additionally, it adds a header containing the reply topic name in which it expects the response back. Remember that we’ve already configured the reply topic in the KafkaMessageListenerContainer bean.
The ReplyingKafkaTemplate bean uses the generated correlation ID as a key to store the RequestReplyFuture object in a thread-safe ConcurrentHashMap. This allows it to work even in multi-threaded environments and support concurrent requests.
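It’s also worth knowing what happens when no reply arrives in time: the ReplyingKafkaTemplate completes the future exceptionally with a KafkaReplyTimeoutException, which get() surfaces wrapped in an ExecutionException. Here’s a rough sketch of a wrapper method we could add to NotificationDispatchService to translate these failures; wrapping them in IllegalStateException is purely our own illustration:
NotificationDispatchResponse dispatchSafely(NotificationDispatchRequest notificationDispatchRequest) {
    try {
        return dispatch(notificationDispatchRequest);
    } catch (ExecutionException exception) {
        if (exception.getCause() instanceof KafkaReplyTimeoutException) {
            // the listener didn't publish a reply within the configured reply-timeout
            throw new IllegalStateException("Timed out waiting for a notification dispatch reply", exception);
        }
        throw new IllegalStateException("Failed to dispatch notification", exception);
    } catch (InterruptedException exception) {
        // restore the interrupt flag before bailing out
        Thread.currentThread().interrupt();
        throw new IllegalStateException("Interrupted while waiting for a notification dispatch reply", exception);
    }
}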
3.2. Defining the Kafka Message Listener
Next, to complete our implementation, let’s create a listener component that listens to messages in the configured request topic and sends back a response to the reply topic:
@Component
class NotificationDispatchListener {

    @SendTo
    @KafkaListener(topics = "${com.baeldung.kafka.synchronous.request-topic}")
    NotificationDispatchResponse listen(NotificationDispatchRequest notificationDispatchRequest) {
        // ... processing logic

        UUID notificationId = UUID.randomUUID();
        return new NotificationDispatchResponse(notificationId);
    }
}
We use the @KafkaListener annotation to listen to the request topic configured in our application.yaml file.
Inside our listen() method, we simply return a NotificationDispatchResponse record containing a unique notificationId.
Importantly, we annotate our method with the @SendTo annotation, which instructs Spring Kafka to extract the correlation ID and reply topic name from the message headers. It uses them to automatically send the method’s return value to the extracted reply topic and adds the same correlation ID to the message header.
This allows the ReplyingKafkaTemplate bean in our NotificationDispatchService class to fetch the correct RequestReplyFuture object using the correlation ID it originally generated.
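To see the round trip in action, we could wire up a simple runner that dispatches a notification at startup and logs the synchronously received reply. This runner is purely our own illustration and isn’t required by the pattern itself:
@Component
class NotificationDispatchRunner implements ApplicationRunner {

    private static final Logger logger = LoggerFactory.getLogger(NotificationDispatchRunner.class);

    private final NotificationDispatchService notificationDispatchService;

    NotificationDispatchRunner(NotificationDispatchService notificationDispatchService) {
        this.notificationDispatchService = notificationDispatchService;
    }

    @Override
    public void run(ApplicationArguments args) throws Exception {
        var request = new NotificationDispatchRequest("user@example.com", "Welcome aboard!");
        // blocks until the listener's reply arrives on the reply topic, or the reply timeout elapses
        NotificationDispatchResponse response = notificationDispatchService.dispatch(request);
        logger.info("Received notification id: {}", response.notificationId());
    }
}
When we run the application against a Kafka broker, the logged notificationId confirms that the calling thread waited for the listener’s reply before proceeding.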
4. Conclusion
In this article, we’ve explored using Apache Kafka to implement synchronous communication between two components in a Spring Boot application.
We walked through the necessary configurations and simulated a notification dispatch system.
By using ReplyingKafkaTemplate, we can convert the asynchronous nature of Apache Kafka into a synchronous request-reply pattern. This approach is a little unconventional, so it’s important to carefully evaluate whether it aligns with the project’s architecture before implementing it in production.
As always, all the code examples used in this article are available over on GitHub.