Apache Kafka – Create Consumer with Threads using Java


A thread is a lightweight subprocess, the smallest unit of processing, with its own path of execution. Threads share the same memory, but they act independently: an exception in one thread does not affect the working of the other threads, despite the shared memory. A Kafka Consumer is used to read data from a topic, and a topic, again, is identified by its name.

Consumers are smart enough to know which broker and which partitions to read from, and in case of broker failures they know how to recover, which is another good property of Apache Kafka. Within each partition, data is read by the consumer in order. In this article, we are going to discuss the step-by-step implementation of how to create an Apache Kafka Consumer with threads using Java.
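Before getting into the Kafka-specific code, it may help to see the coordination pattern the tutorial uses in isolation: a worker thread runs a loop, the main thread asks it to stop, and a shared CountDownLatch lets main block until the worker has actually cleaned up. The sketch below uses only the JDK (no Kafka), with a `volatile` flag standing in for `consumer.wakeup()` and a busy loop standing in for `consumer.poll()`; the class and method names are illustrative, not from the article's code.

```java
import java.util.concurrent.CountDownLatch;

public class LatchShutdownDemo {

    // A worker that signals completion through a shared latch,
    // mirroring how the article's ConsumerThread reports back to main.
    static class Worker implements Runnable {
        private final CountDownLatch latch;
        private volatile boolean running = true;

        Worker(CountDownLatch latch) {
            this.latch = latch;
        }

        @Override
        public void run() {
            try {
                while (running) {
                    Thread.onSpinWait(); // stand-in for consumer.poll()
                }
            } finally {
                latch.countDown(); // tell main we are done
            }
        }

        void shutDown() {
            running = false; // stand-in for consumer.wakeup()
        }
    }

    public static void main(String[] args) throws InterruptedException {
        CountDownLatch latch = new CountDownLatch(1);
        Worker worker = new Worker(latch);
        new Thread(worker).start();

        worker.shutDown();
        latch.await(); // blocks until the worker's finally block has run
        System.out.println("worker finished, latch count = " + latch.getCount());
    }
}
```

The Kafka version below follows the same shape, except that the stop signal is delivered by `consumer.wakeup()`, which makes a blocked `poll()` throw a WakeupException instead of checking a flag.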

Step-by-Step Implementation

Step 1: Create a New Apache Kafka Project in IntelliJ

To create a new Apache Kafka Project in IntelliJ using Java and Maven please refer to How to Create an Apache Kafka Project in IntelliJ using Java and Maven.

Step 2: Install and Run Apache Kafka

To Install and Run Apache Kafka in your local system please refer to How to Install and Run Apache Kafka.

Step 3: Create a Consumer with Threads

Working with the ConsumerThread Class:

Java
public class ConsumerThread implements Runnable {
  
        private final CountDownLatch latch;
        KafkaConsumer<String, String> consumer;
        private final Logger logger = LoggerFactory.getLogger(ConsumerThread.class);
  
        public ConsumerThread(String topic, String bootstrapServer, String groupId, CountDownLatch latch) {
  
            this.latch = latch;
  
            // Create Consumer Properties
            Properties properties = new Properties();
            properties.setProperty(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServer);
            properties.setProperty(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            properties.setProperty(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            properties.setProperty(ConsumerConfig.GROUP_ID_CONFIG, groupId);
            properties.setProperty(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
  
            // Create Consumer
            consumer = new KafkaConsumer<>(properties);
  
            // Subscribe Consumer to Our Topics
            consumer.subscribe(List.of(topic));
        }
  
        @Override
        public void run() {
            try {
                // Poll the data
                while (true) {
                    ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(100));
  
                    for (ConsumerRecord<String, String> record : records) {
                        logger.info("Key: " + record.key() +
                                " Value: " + record.value() +
                                " Partition: " + record.partition() +
                                " Offset: " + record.offset()
                        );
                    }
                }
            } catch (WakeupException e) {
                logger.info("Received shutdown signal");
            } finally {
                consumer.close();
                // Tell our main code
                // We are done
                // with the consumer
                latch.countDown();
            }
        }
  
        public void shutDown() {
            // The wakeup() method is used
            // to interrupt consumer.poll()
            // It will throw WakeupException
            consumer.wakeup();
        }
    }


Inside the run() method:

Java
private void run() {
        Logger logger = LoggerFactory.getLogger(KafkaProducerWithKeyDemo.class);
  
        String bootstrapServer = "127.0.0.1:9092";
        String groupId = "my-third-gfg-group";
        String topic = "gfg_topic";
  
        // CountDownLatch for dealing with multiple threads
        CountDownLatch latch = new CountDownLatch(1);
  
        // Create the Consumer Runnable
        logger.info("Creating the consumer thread");
        ConsumerThread myConsumerThread = new ConsumerThread(topic, bootstrapServer, groupId, latch);
  
        // Start the Thread
        Thread myThread = new Thread(myConsumerThread);
        myThread.start();
  
        // Add a Shutdown Hook
        Runtime.getRuntime().addShutdownHook(new Thread(() -> {
            logger.info("Caught shutdown hook");
            myConsumerThread.shutDown();
            try {
                latch.await();
            } catch (InterruptedException e) {
                throw new RuntimeException(e);
            }
            logger.info("Application has exited");
        }));
  
        try {
            latch.await();
        } catch (InterruptedException e) {
            logger.error("Application got interrupted", e);
        } finally {
            logger.info("Application is Closing");
        }
    }


Below is the complete code. Comments are added inside the code to explain it in more detail.

Java
package org.kafkademo.basics;
  
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.errors.WakeupException;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
  
import java.time.Duration;
import java.util.List;
import java.util.Properties;
import java.util.concurrent.CountDownLatch;
  
public class KafkaConsumerWithTreadDemo {
  
    public static void main(String[] args) {
        new KafkaConsumerWithTreadDemo().run();
    }
  
    public KafkaConsumerWithTreadDemo() {
    }
  
    private void run() {
        Logger logger = LoggerFactory.getLogger(KafkaProducerWithKeyDemo.class);
  
        String bootstrapServer = "127.0.0.1:9092";
        String groupId = "my-third-gfg-group";
        String topic = "gfg_topic";
  
        // CountDownLatch for dealing with multiple threads
        CountDownLatch latch = new CountDownLatch(1);
  
        // Create the Consumer Runnable
        logger.info("Creating the consumer thread");
        ConsumerThread myConsumerThread = new ConsumerThread(topic, bootstrapServer, groupId, latch);
  
        // Start the Thread
        Thread myThread = new Thread(myConsumerThread);
        myThread.start();
  
        // Add a Shutdown Hook
        Runtime.getRuntime().addShutdownHook(new Thread(() -> {
            logger.info("Caught shutdown hook");
            myConsumerThread.shutDown();
            try {
                latch.await();
            } catch (InterruptedException e) {
                throw new RuntimeException(e);
            }
            logger.info("Application has exited");
        }));
  
        try {
            latch.await();
        } catch (InterruptedException e) {
            logger.error("Application got interrupted", e);
        } finally {
            logger.info("Application is Closing");
        }
    }
  
    public class ConsumerThread implements Runnable {
  
        private final CountDownLatch latch;
        KafkaConsumer<String, String> consumer;
        private final Logger logger = LoggerFactory.getLogger(ConsumerThread.class);
  
        public ConsumerThread(String topic, String bootstrapServer, String groupId, CountDownLatch latch) {
  
            this.latch = latch;
  
            // Create Consumer Properties
            Properties properties = new Properties();
            properties.setProperty(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServer);
            properties.setProperty(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            properties.setProperty(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            properties.setProperty(ConsumerConfig.GROUP_ID_CONFIG, groupId);
            properties.setProperty(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
  
            // Create Consumer
            consumer = new KafkaConsumer<>(properties);
  
            // Subscribe Consumer to Our Topics
            consumer.subscribe(List.of(topic));
        }
  
        @Override
        public void run() {
            try {
                // Poll the data
                while (true) {
                    ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(100));
  
                    for (ConsumerRecord<String, String> record : records) {
                        logger.info("Key: " + record.key() +
                                " Value: " + record.value() +
                                " Partition: " + record.partition() +
                                " Offset: " + record.offset()
                        );
                    }
                }
            } catch (WakeupException e) {
                logger.info("Received shutdown signal");
            } finally {
                consumer.close();
                // Tell our main code
                // We are done
                // with the consumer
                latch.countDown();
            }
        }
  
        public void shutDown() {
            // The wakeup() method is used
            // to interrupt consumer.poll()
            // It will throw WakeupException
            consumer.wakeup();
        }
    }
  
}


Step 4: Run the Application

Now run the application. Below is the output:

"C:\Users\Amiya Rout\.jdks\corretto-11.0.15\bin\java.exe"
[main] INFO org.kafkademo.basics.KafkaProducerWithKeyDemo - Creating the consumer thread
[main] INFO org.apache.kafka.clients.consumer.ConsumerConfig - ConsumerConfig values: 
    allow.auto.create.topics = true
    auto.commit.interval.ms = 5000
    auto.offset.reset = earliest
    bootstrap.servers = [127.0.0.1:9092]
    check.crcs = true
    client.dns.lookup = use_all_dns_ips
    client.id = consumer-my-third-gfg-group-1
    client.rack = 
    connections.max.idle.ms = 540000
    default.api.timeout.ms = 60000
    enable.auto.commit = true
    exclude.internal.topics = true
    fetch.max.bytes = 52428800
    fetch.max.wait.ms = 500
    fetch.min.bytes = 1
    group.id = my-third-gfg-group
    group.instance.id = null
    heartbeat.interval.ms = 3000
    interceptor.classes = []
    internal.leave.group.on.close = true
    internal.throw.on.fetch.stable.offset.unsupported = false
    isolation.level = read_uncommitted
    key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
    max.partition.fetch.bytes = 1048576
    max.poll.interval.ms = 300000
    max.poll.records = 500
    metadata.max.age.ms = 300000
    metric.reporters = []
    metrics.num.samples = 2
    metrics.recording.level = INFO
    metrics.sample.window.ms = 30000
    partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor]
    receive.buffer.bytes = 65536
    reconnect.backoff.max.ms = 1000
    reconnect.backoff.ms = 50
    request.timeout.ms = 30000
    retry.backoff.ms = 100
    sasl.client.callback.handler.class = null
    sasl.jaas.config = null
    sasl.kerberos.kinit.cmd = /usr/bin/kinit
    sasl.kerberos.min.time.before.relogin = 60000
    sasl.kerberos.service.name = null
    sasl.kerberos.ticket.renew.jitter = 0.05
    sasl.kerberos.ticket.renew.window.factor = 0.8
    sasl.login.callback.handler.class = null
    sasl.login.class = null
    sasl.login.refresh.buffer.seconds = 300
    sasl.login.refresh.min.period.seconds = 60
    sasl.login.refresh.window.factor = 0.8
    sasl.login.refresh.window.jitter = 0.05
    sasl.mechanism = GSSAPI
    security.protocol = PLAINTEXT
    security.providers = null
    send.buffer.bytes = 131072
    session.timeout.ms = 10000
    socket.connection.setup.timeout.max.ms = 30000
    socket.connection.setup.timeout.ms = 10000
    ssl.cipher.suites = null
    ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
    ssl.endpoint.identification.algorithm = https
    ssl.engine.factory.class = null
    ssl.key.password = null
    ssl.keymanager.algorithm = SunX509
    ssl.keystore.certificate.chain = null
    ssl.keystore.key = null
    ssl.keystore.location = null
    ssl.keystore.password = null
    ssl.keystore.type = JKS
    ssl.protocol = TLSv1.3
    ssl.provider = null
    ssl.secure.random.implementation = null
    ssl.trustmanager.algorithm = PKIX
    ssl.truststore.certificates = null
    ssl.truststore.location = null
    ssl.truststore.password = null
    ssl.truststore.type = JKS
    value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer

[main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka version: 2.8.0
[main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka commitId: ebb1d6e21cc92130
[main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka startTimeMs: 1674840914451
[main] INFO org.apache.kafka.clients.consumer.KafkaConsumer - [Consumer clientId=consumer-my-third-gfg-group-1, groupId=my-third-gfg-group] Subscribed to topic(s): gfg_topic
[Thread-0] INFO org.apache.kafka.clients.Metadata - [Consumer clientId=consumer-my-third-gfg-group-1, groupId=my-third-gfg-group] Cluster ID: orhF-HNsR465cORhmU3pTg
[Thread-0] INFO org.apache.kafka.clients.consumer.internals.AbstractCoordinator - [Consumer clientId=consumer-my-third-gfg-group-1, groupId=my-third-gfg-group] Discovered group coordinator LAPTOP-FT9V6MVP:9092 (id: 2147483647 rack: null)
[Thread-0] INFO org.apache.kafka.clients.consumer.internals.AbstractCoordinator - [Consumer clientId=consumer-my-third-gfg-group-1, groupId=my-third-gfg-group] (Re-)joining group
[Thread-0] INFO org.apache.kafka.clients.consumer.internals.AbstractCoordinator - [Consumer clientId=consumer-my-third-gfg-group-1, groupId=my-third-gfg-group] (Re-)joining group
[Thread-0] INFO org.apache.kafka.clients.consumer.internals.AbstractCoordinator - [Consumer clientId=consumer-my-third-gfg-group-1, groupId=my-third-gfg-group] Successfully joined group with generation Generation{generationId=1, memberId='consumer-my-third-gfg-group-1-0c7840be-8f00-4ad4-8dd8-713c05b359f7', protocol='range'}
[Thread-0] INFO org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=consumer-my-third-gfg-group-1, groupId=my-third-gfg-group] Finished assignment for group at generation 1: {consumer-my-third-gfg-group-1-0c7840be-8f00-4ad4-8dd8-713c05b359f7=Assignment(partitions=[gfg_topic-0])}
[Thread-0] INFO org.apache.kafka.clients.consumer.internals.AbstractCoordinator - [Consumer clientId=consumer-my-third-gfg-group-1, groupId=my-third-gfg-group] Successfully synced group in generation Generation{generationId=1, memberId='consumer-my-third-gfg-group-1-0c7840be-8f00-4ad4-8dd8-713c05b359f7', protocol='range'}
[Thread-0] INFO org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=consumer-my-third-gfg-group-1, groupId=my-third-gfg-group] Notifying assignor about the new Assignment(partitions=[gfg_topic-0])
[Thread-0] INFO org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=consumer-my-third-gfg-group-1, groupId=my-third-gfg-group] Adding newly assigned partitions: gfg_topic-0
[Thread-0] INFO org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=consumer-my-third-gfg-group-1, groupId=my-third-gfg-group] Found no committed offset for partition gfg_topic-0
[Thread-0] INFO org.apache.kafka.clients.consumer.internals.SubscriptionState - [Consumer clientId=consumer-my-third-gfg-group-1, groupId=my-third-gfg-group] Resetting offset for partition gfg_topic-0 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[LAPTOP-FT9V6MVP:9092 (id: 0 rack: null)], epoch=0}}.
[Thread-0] INFO org.kafkademo.basics.KafkaConsumerWithTreadDemo$ConsumerThread - Key: id_0 Value: hello_geeksforgeeks 0 Partition: 0 Offset: 0
[Thread-0] INFO org.kafkademo.basics.KafkaConsumerWithTreadDemo$ConsumerThread - Key: id_1 Value: hello_geeksforgeeks 1 Partition: 0 Offset: 1
[Thread-0] INFO org.kafkademo.basics.KafkaConsumerWithTreadDemo$ConsumerThread - Key: id_2 Value: hello_geeksforgeeks 2 Partition: 0 Offset: 2
[Thread-0] INFO org.kafkademo.basics.KafkaConsumerWithTreadDemo$ConsumerThread - Key: id_3 Value: hello_geeksforgeeks 3 Partition: 0 Offset: 3
[Thread-0] INFO org.kafkademo.basics.KafkaConsumerWithTreadDemo$ConsumerThread - Key: id_4 Value: hello_geeksforgeeks 4 Partition: 0 Offset: 4
[Thread-0] INFO org.kafkademo.basics.KafkaConsumerWithTreadDemo$ConsumerThread - Key: id_5 Value: hello_geeksforgeeks 5 Partition: 0 Offset: 5
[Thread-0] INFO org.kafkademo.basics.KafkaConsumerWithTreadDemo$ConsumerThread - Key: id_6 Value: hello_geeksforgeeks 6 Partition: 0 Offset: 6
[Thread-0] INFO org.kafkademo.basics.KafkaConsumerWithTreadDemo$ConsumerThread - Key: id_7 Value: hello_geeksforgeeks 7 Partition: 0 Offset: 7
[Thread-0] INFO org.kafkademo.basics.KafkaConsumerWithTreadDemo$ConsumerThread - Key: id_8 Value: hello_geeksforgeeks 8 Partition: 0 Offset: 8
[Thread-0] INFO org.kafkademo.basics.KafkaConsumerWithTreadDemo$ConsumerThread - Key: id_9 Value: hello_geeksforgeeks 9 Partition: 0 Offset: 9
[Thread-1] INFO org.kafkademo.basics.KafkaProducerWithKeyDemo - Caught shutdown hook
[Thread-0] INFO org.kafkademo.basics.KafkaConsumerWithTreadDemo$ConsumerThread - Received shutdown signal
[Thread-0] INFO org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=consumer-my-third-gfg-group-1, groupId=my-third-gfg-group] Revoke previously assigned partitions gfg_topic-0
[Thread-0] INFO org.apache.kafka.clients.consumer.internals.AbstractCoordinator - [Consumer clientId=consumer-my-third-gfg-group-1, groupId=my-third-gfg-group] Member consumer-my-third-gfg-group-1-0c7840be-8f00-4ad4-8dd8-713c05b359f7 sending LeaveGroup request to coordinator LAPTOP-FT9V6MVP:9092 (id: 2147483647 rack: null) due to the consumer is being closed
[Thread-0] INFO org.apache.kafka.common.metrics.Metrics - Metrics scheduler closed
[Thread-0] INFO org.apache.kafka.common.metrics.Metrics - Closing reporter org.apache.kafka.common.metrics.JmxReporter
[Thread-0] INFO org.apache.kafka.common.metrics.Metrics - Metrics reporters closed
[Thread-0] INFO org.apache.kafka.common.utils.AppInfoParser - App info kafka.consumer for consumer-my-third-gfg-group-1 unregistered
[main] INFO org.kafkademo.basics.KafkaProducerWithKeyDemo - Application is Closing
[Thread-1] INFO org.kafkademo.basics.KafkaProducerWithKeyDemo - Application has exited

Process finished with exit code 130


Last Updated : 18 Mar, 2023