Kafka is a distributed streaming platform that was created by LinkedIn and later open-sourced under the Apache Software Foundation. It has a large, active community of users and contributors. Kafka is built on a distributed architecture, which means it can run across multiple servers, allowing it to scale out both processing power and storage capacity.
Components of Kafka: Topics, Producers, Consumers, and Brokers
In this article, we will walk through the need for Kafka, its applications, its prerequisites, and a simple implementation of a Hello World program using Kafka.
The following are a few key aspects that justify the need for Kafka:
The following are a few applications of Kafka:
Let us take an example to understand how a message is sent over a topic in Kafka. Suppose we want to send the message 'Hello World' over a topic from scratch. To do so, we follow these steps:
Note: The exact syntax for each step is out of scope for this blog. The goal is to give you an idea of the flow of execution for sending a message over a topic.
Step-1: Start the ZooKeeper server
Step-2: Start the Kafka server
Step-3: Create a topic
Step-4: Create a producer node
Step-5: Send a message using the producer node
Step-6: Create a consumer node and subscribe to the topic
Following the above steps, the consumer node subscribed to the topic will be able to receive the message.
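Although the exact syntax is out of scope here, the steps above can be sketched using the command-line tools bundled with a standard Kafka distribution. This is a minimal sketch, assuming a local single-broker setup run from the Kafka installation directory; the topic name `hello-topic` is an assumption chosen for this example, and older Kafka versions use `--zookeeper localhost:2181` in place of `--bootstrap-server`:

```shell
# Step 1: start ZooKeeper (uses the sample config shipped with Kafka)
bin/zookeeper-server-start.sh config/zookeeper.properties

# Step 2: start the Kafka broker (in a second terminal)
bin/kafka-server-start.sh config/server.properties

# Step 3: create a topic ("hello-topic" is just an example name)
bin/kafka-topics.sh --create --topic hello-topic \
  --bootstrap-server localhost:9092 --partitions 1 --replication-factor 1

# Steps 4-5: start a console producer, then type the message
# at the ">" prompt: Hello World
bin/kafka-console-producer.sh --topic hello-topic \
  --bootstrap-server localhost:9092

# Step 6: start a console consumer subscribed to the topic;
# it prints "Hello World" once the message is consumed
bin/kafka-console-consumer.sh --topic hello-topic \
  --from-beginning --bootstrap-server localhost:9092
```

Each long-running process (ZooKeeper, the broker, the producer, the consumer) is typically run in its own terminal window.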
In order to learn Kafka, you should have a good understanding of distributed messaging systems, Scala or Java, and the Linux environment.
Kafka is suited to professionals who want to build a career in big data analytics using the Apache Kafka messaging system.