
Apache Kafka Producer


Kafka producers write data to topics, and topics are made of partitions. A producer automatically knows which broker and partition to write to for each message, and if a Kafka broker in your cluster fails, the producer automatically recovers from it. This built-in resilience is a big part of what makes Kafka so widely used today. Pictured as a diagram, the producer sits on the left-hand side and sends data into each of the partitions of our topic.
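As a rough illustration (not part of the original article), the sketch below shows the producer-side settings behind this automatic recovery: the client only needs a bootstrap broker to discover the rest of the cluster, and it retries failed sends on its own when a broker goes down. The broker address and topic name here are assumptions for a local setup.

Java

// Hypothetical sketch: producer settings behind Kafka's automatic failure handling
import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class ResilientProducerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();

        // Entry point only; the client discovers the rest of the cluster from here
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

        // Wait for the partition's replicas to acknowledge each write
        props.put(ProducerConfig.ACKS_CONFIG, "all");

        // Keep retrying transient failures (e.g. a broker restart) instead of giving up
        props.put(ProducerConfig.RETRIES_CONFIG, Integer.MAX_VALUE);

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("NewTopic", "hello from the producer"));
        }
    }
}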

 

So how does a producer know which topic partition to send the data to? For this, we can use message keys. Alongside the message value, we can choose to send a message key, and that key can be whatever you want: a string, a number, and so on. If you don't send a key (the key is null), the data is sent to partitions in a round-robin fashion: the first message goes to partition 0, the second to partition 1, then partition 2, and so on. But if you do send a key with your message, all messages that share the same key will always go to the same partition. This is a very important property of Kafka, because it is what gives you ordering for a specific field.
For example, if you have cars and you want to receive all the GPS positions for a particular car in order, you need to set the message key to the unique identifier of that car, i.e. carID. So in the car GPS example discussed in the article Topics, Partitions, and Offsets in Apache Kafka, we choose the message key to be the carID, so that all the positions for that one specific car arrive in order within the same partition.

 

Note: Please refer to the topic example discussed in the article Topics, Partitions, and Offsets in Apache Kafka to understand which example we are discussing here.

In the second example, the producer sends data to two partitions and the key is carID: carID_123 always goes to partition 0, carID_234 also always goes to partition 0, while carID_345 and carID_456 always go to partition 1. You will therefore never find carID_123 data in partition 1, because of the key property we just described.
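To make the key behaviour concrete, here is a minimal sketch (again, not from the original article) of a plain Java producer sending keyed records. The topic name car_gps, the sample GPS coordinates, and the local broker address are assumptions for illustration. Records that share a carID key always land in the same partition, while the record sent without a key is placed by the partitioner (round robin in the classic description; newer clients may use a sticky partitioner for null keys).

Java

// Hypothetical sketch: keyed sends with the plain Kafka Java client
import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class CarGpsProducerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Same key => same partition, so positions for one car stay in order
            producer.send(new ProducerRecord<>("car_gps", "carID_123", "52.5200,13.4050"));
            producer.send(new ProducerRecord<>("car_gps", "carID_123", "52.5205,13.4061"));
            producer.send(new ProducerRecord<>("car_gps", "carID_345", "48.8566,2.3522"));

            // No key: the partitioner decides which partition receives this record
            producer.send(new ProducerRecord<>("car_gps", "a record without a key"));
        }
    }
}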

 

Apache Kafka Producer Example

In this example, we will discuss how to produce messages to Kafka topics with Spring Boot. Briefly, Spring Boot is one of the most popular and most used frameworks in the Java ecosystem. It is a microservice-oriented framework, and building a production-ready application with it takes very little time. Spring Boot makes it easy to create stand-alone, production-grade Spring-based applications that you can "just run". So let's start with the implementation.

Prerequisite: Make sure you have installed Apache Kafka on your local machine. Refer to the article How to Install and Run Apache Kafka on Windows?

Step 1: Go to this link https://start.spring.io/ and create a Spring Boot project. Add the following dependencies to your Spring Boot project. 

  • Spring Web
  • Spring for Apache Kafka

Step 2: Now let’s create a controller class named DemoController.

Java




// Java Program to Illustrate the Controller Class

package com.amiya.kafka.apachekafkaproducer;

// Importing required classes
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.web.bind.annotation.*;

// Marks this class as a REST controller so Spring exposes its handler methods
@RestController
public class DemoController {

    // KafkaTemplate is Spring's helper for sending messages to Kafka topics
    @Autowired
    KafkaTemplate<String, String> kafkaTemplate;

    // Name of the topic the messages are published to
    private static final String TOPIC = "NewTopic";

    // Publish messages using a GET mapping; the message is taken from the URL path
    @GetMapping("/publish/{message}")
    public String publishMessage(@PathVariable("message")
                                 final String message)
    {
        // Send the message value (without a key) to the topic
        kafkaTemplate.send(TOPIC, message);

        return "Published Successfully";
    }
}
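One detail the controller relies on is a configured KafkaTemplate. Spring Boot can auto-configure it from application.properties, but as a sketch the explicit bean wiring could look like the class below; the class name KafkaProducerConfig and the broker address are assumptions, not part of the original article.

Java

// Hypothetical sketch: explicit producer wiring for the KafkaTemplate used above
package com.amiya.kafka.apachekafkaproducer;

import java.util.HashMap;
import java.util.Map;

import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.serialization.StringSerializer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.core.DefaultKafkaProducerFactory;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.core.ProducerFactory;

@Configuration
public class KafkaProducerConfig {

    // Connects to the single local broker started in Step 3
    @Bean
    public ProducerFactory<String, String> producerFactory() {
        Map<String, Object> config = new HashMap<>();
        config.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        config.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        config.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        return new DefaultKafkaProducerFactory<>(config);
    }

    // The KafkaTemplate that gets autowired into DemoController
    @Bean
    public KafkaTemplate<String, String> kafkaTemplate() {
        return new KafkaTemplate<>(producerFactory());
    }
}

With this in place, kafkaTemplate.send(TOPIC, message) publishes the value without a key; the send(TOPIC, key, message) overload would route by key, as described earlier in this article.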


Step 3: Now, in order to publish messages to the Kafka topic with Spring Boot, we have to do the following things:

  1. Run the Apache Zookeeper server
  2. Run the Apache Kafka server
  3. Listen to the messages coming from the new topic

Run your Apache Zookeeper server by using this command

C:\kafka>.\bin\windows\zookeeper-server-start.bat .\config\zookeeper.properties

Similarly, run your Apache Kafka server by using this command

C:\kafka>.\bin\windows\kafka-server-start.bat .\config\server.properties

Run the following command to listen to the messages coming from the new topic:

C:\kafka>.\bin\windows\kafka-console-consumer.bat --bootstrap-server localhost:9092 --topic NewTopic --from-beginning
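If automatic topic creation is disabled on your broker, the topic may need to be created first (this step is not in the original article). With a recent Kafka version the command looks like this; older releases use --zookeeper localhost:2181 instead of --bootstrap-server:

C:\kafka>.\bin\windows\kafka-topics.bat --create --topic NewTopic --bootstrap-server localhost:9092 --partitions 1 --replication-factor 1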

Step 4: Now run your Spring Boot application. Make sure you have changed the port number in the application.properties file:

server.port=8081

Let's run the Spring Boot application from the ApacheKafkaProducerApplication file.

Step 5: Browse to this URL, passing your message after /publish/:

http://localhost:8081/publish/GeeksforGeeks

As we have passed "GeeksforGeeks" here, you can see that we got "Published Successfully" in return, and you can also see the message appear on the consumer side. The streaming of the message happens in real time.

Output

Similarly, when we pass "Hello World", we again get "Published Successfully" in return, and the message shows up on the consumer side in real time as well.

Output


