
Kafka with Spring Boot Microservices

Kafka is an open-source distributed event streaming platform used for building real-time data pipelines and streaming applications.

It provides features like fault tolerance, horizontal scalability, and low-latency message delivery, making it suitable for use cases such as real-time analytics, log aggregation, and data integration across systems. Kafka’s architecture consists of producers that publish data to topics, consumers that subscribe to topics to process data, and brokers that manage the storage and distribution of data across clusters.

Installation

To install Kafka, download it and follow the steps in the Apache Kafka quick start guide here.

After downloading the Kafka .tgz file on a Mac, extract it and run the commands shown below to start ZooKeeper and Kafka in two separate terminals.

Once both are running, open a new terminal to start a console producer; similarly, start a console consumer in another terminal.

Integrating Kafka with Spring Boot enables the creation of highly responsive and scalable real-time applications. Leveraging Kafka’s distributed event streaming platform within Spring Boot offers several advantages:

  1. Event-Driven Architecture: Kafka facilitates an event-driven architecture, where applications communicate through asynchronous events. Spring Boot’s support for event-driven programming aligns seamlessly with Kafka’s capabilities, enabling the development of loosely coupled and highly responsive systems.
  2. Scalability and Fault Tolerance: Kafka’s distributed nature allows for horizontal scalability and fault tolerance. Spring Boot applications can easily integrate with Kafka clusters, ensuring reliable message processing and high availability even under heavy loads.
  3. Stream Processing and Data Pipelines: Kafka enables the creation of robust data pipelines for real-time stream processing. Spring Boot applications can consume, process, and produce Kafka messages, allowing for the seamless integration of data processing logic into the application workflow.
  4. Microservices Communication: Kafka serves as a reliable messaging backbone for communication between microservices in distributed systems. Spring Boot microservices can publish and consume messages via Kafka topics, enabling decoupled communication and improved system resilience.
  5. Integration with Spring Ecosystem: Spring Boot offers extensive support for Kafka integration through Spring Kafka and Spring Cloud Stream frameworks. These integrations provide simplified configuration, robust error handling, and seamless integration with other Spring components.

Kafka, ZooKeeper, topics, and consumer groups are fundamental components in building distributed, scalable, and fault-tolerant messaging systems. Here’s a brief overview of each:

  1. Kafka:
    • Kafka is a distributed streaming platform designed to handle real-time data feeds and processing.
    • It allows publishers to write data to topics and enables subscribers (consumers) to read and process these data streams.
    • Kafka offers high throughput, fault tolerance, horizontal scalability, and low-latency message delivery.
  2. ZooKeeper:
    • ZooKeeper is a centralized service for maintaining configuration information, providing distributed synchronization, and managing group services.
    • Kafka uses ZooKeeper for cluster coordination, metadata management, leader election, and maintaining partition states.
    • ZooKeeper ensures that Kafka brokers are aware of each other’s existence and helps manage topics, partitions, and consumer group membership.
  3. Topics:
    • Topics are logical channels or categories to which messages are published by producers and from which messages are consumed by consumers.
    • Each topic in Kafka is divided into one or more partitions, which are distributed across Kafka brokers.
    • Topics can be configured with replication factors to ensure fault tolerance and high availability of data.
  4. Consumer Groups:
    • Consumer groups are logical groupings of Kafka consumers that jointly consume and process messages from one or more topics.
    • Within a consumer group, each consumer is assigned to one or more partitions of a topic.
    • Kafka ensures that each partition is consumed by only one consumer within the same consumer group, enabling parallel processing of messages.
    • Consumer groups enable horizontal scaling of consumers and ensure fault tolerance and load balancing within the group.
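For example, once the cluster is running you can inspect consumer groups with the kafka-consumer-groups.sh tool that ships with Kafka (console-consumer is the group id used later in this post):

bin/kafka-consumer-groups.sh --bootstrap-server localhost:9092 --list
bin/kafka-consumer-groups.sh --bootstrap-server localhost:9092 --describe --group console-consumer

The --describe output shows, per partition, the assigned consumer, its current offset, and its lag.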

Start ZooKeeper: bin/zookeeper-server-start.sh config/zookeeper.properties

Start the Kafka server: bin/kafka-server-start.sh config/server.properties

Create a topic: bin/kafka-topics.sh --create --topic test-kafka --bootstrap-server localhost:9092
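
To verify the topic and inspect its partition layout, describe it:

bin/kafka-topics.sh --describe --topic test-kafka --bootstrap-server localhost:9092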

Produce data:

bin/kafka-console-producer.sh --topic test-kafka --bootstrap-server localhost:9092

Consume data:

bin/kafka-console-consumer.sh --topic test-kafka --from-beginning --bootstrap-server localhost:9092

Let’s integrate Kafka with Spring Boot

Create two applications, one producer and one consumer. In the producer, define a bean that creates a new topic; from the business layer you can then use a KafkaTemplate to send data to that topic.

import org.apache.kafka.clients.admin.NewTopic;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.config.TopicBuilder;

@Configuration
public class KafkaConfig {

	// Declares the "cab-location" topic; Spring Boot's KafkaAdmin creates it on startup if it doesn't exist
	@Bean
	public NewTopic topic() {
		return TopicBuilder.name("cab-location").build();
	}
}
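
By default this picks up the broker’s default partition count and replication factor. TopicBuilder can also set them explicitly; the values below are illustrative and suit a local single-broker setup:

@Bean
public NewTopic topic() {
	// 3 partitions for parallelism; replication factor 1 because a single local broker cannot replicate further
	return TopicBuilder.name("cab-location")
			.partitions(3)
			.replicas(1)
			.build();
}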

Once you have tested it in the terminal, move on to the application: add the following dependency to pom.xml.

<dependency>
  <groupId>org.springframework.kafka</groupId>
  <artifactId>spring-kafka</artifactId>
</dependency>

For the consumer, the following properties are required. Note that a consumer needs deserializers, not serializers; set the group-id to the consumer group you want this application to join (you can list existing group ids with the kafka-consumer-groups.sh tool shown earlier).

server.port=8081
spring.kafka.consumer.bootstrap-servers=localhost:9092
spring.kafka.consumer.key-deserializer=org.apache.kafka.common.serialization.StringDeserializer
spring.kafka.consumer.value-deserializer=org.apache.kafka.common.serialization.StringDeserializer
spring.kafka.consumer.group-id=console-consumer
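
By default, a consumer group with no committed offsets starts from the latest offset and only sees new messages. To mirror the console consumer’s --from-beginning behaviour, you can optionally add:

spring.kafka.consumer.auto-offset-reset=earliest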

Similarly, the producer application needs the following properties:

server.port=8082
spring.kafka.producer.bootstrap-servers=localhost:9092
spring.kafka.producer.key-serializer=org.apache.kafka.common.serialization.StringSerializer
spring.kafka.producer.value-serializer=org.apache.kafka.common.serialization.StringSerializer

Create a producer service to publish data as follows (the class name CabLocationService is illustrative):

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.stereotype.Service;

@Service
public class CabLocationService {
	@Autowired
	private KafkaTemplate<String, Object> kafkaTemplate;

	// Publishes the location string to the "cab-location" topic
	public boolean updateLocation(String location) {
		kafkaTemplate.send("cab-location", location);
		return true;
	}
}
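
To exercise the producer, you can expose the service through a REST endpoint. The controller below is a hypothetical sketch; CabLocationController and the /cab/location path are illustrative names, not part of the original application:

import org.springframework.web.bind.annotation.PutMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
@RequestMapping("/cab")
public class CabLocationController {
	private final CabLocationService cabLocationService;

	public CabLocationController(CabLocationService cabLocationService) {
		this.cabLocationService = cabLocationService;
	}

	// PUT /cab/location with the location string as the request body
	@PutMapping("/location")
	public String updateLocation(@RequestBody String location) {
		cabLocationService.updateLocation(location);
		return "Location published";
	}
}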

In the consumer application, annotate a listener method with @KafkaListener:

@KafkaListener(topics = "cab-location", groupId = "console-consumer")
public void cabLocation(String location) {
	System.out.println(location);
}

Now, whenever you produce data to the Kafka topic, the consumer will pick it up automatically.
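
As a quick sanity check, you can also attach the console consumer to the same topic while the producer application is sending data; each message should show up both there and in the Spring Boot listener’s output:

bin/kafka-console-consumer.sh --topic cab-location --from-beginning --bootstrap-server localhost:9092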

Please find the detailed application on GitHub here.
