
Kafka Burrow

Introduction to Kafka Burrow

Kafka itself was originally developed at LinkedIn, and Burrow is likewise a LinkedIn project. Burrow provides a monitoring solution for Apache Kafka, focused primarily on checking consumer lag. It runs as a service and, unlike threshold-based monitors, does not require you to specify any threshold value: it monitors the committed offsets of the Kafka consumers and computes the status of a consumer on demand. The on-demand check is made through an HTTP API call, and Burrow must be configured with the details of the Kafka cluster, such as ZooKeeper information, cluster details, port information, and so on. Burrow also supports notifiers, such as email notifications, for alerting.
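The on-demand status check mentioned above is just an HTTP call. A minimal Python sketch follows; the cluster name `local` and group name `my-group` are placeholders, and the `/v3/kafka/...` path and response shape follow Burrow's documented HTTP API, so verify them against your Burrow version:

```python
# Sketch of an on-demand consumer status check against Burrow's HTTP API.
# Assumes Burrow listens on port 8000; cluster/group names are placeholders.

def status_url(burrow_host: str, cluster: str, group: str) -> str:
    """Build the v3 endpoint that returns a consumer group's evaluated status."""
    return f"http://{burrow_host}:8000/v3/kafka/{cluster}/consumer/{group}/status"

def summarize(response: dict) -> str:
    """Reduce Burrow's JSON status response to a one-line summary."""
    status = response["status"]
    return f"group={status['group']} status={status['status']} totallag={status['totallag']}"

# A response of roughly the shape Burrow returns (illustrative values):
sample = {
    "error": False,
    "status": {"cluster": "local", "group": "my-group", "status": "OK", "totallag": 0},
}

print(status_url("localhost", "local", "my-group"))
print(summarize(sample))
```

In a real deployment you would fetch `status_url(...)` with any HTTP client and feed the decoded JSON to `summarize`.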

Syntax of Kafka Burrow

As such, no specific syntax exists for Kafka Burrow. To work with it, we need to understand its overall architecture and its working flow. Burrow exposes a number of HTTP endpoints. As per the requirement, we install the Burrow environment and configure it accordingly. For installation, we fetch the necessary files from the GitHub repo and build them with the Go toolchain, but most commonly Burrow is run as a Docker container. Once the installation is complete, we configure the Burrow environment and access it from a browser or any HTTP client.

Note: Generally, port 8000 is used to access the Kafka Burrow environment.
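Burrow is configured through a single TOML file (commonly named `burrow.toml`). The fragment below is an illustrative minimum, with hostnames and the cluster name `local` as placeholders; the section and key names follow Burrow's documented configuration, so check them against your version:

```toml
[zookeeper]
servers = [ "zk-host:2181" ]

[client-profile.local]
kafka-version = "2.8.0"

[cluster.local]
class-name = "kafka"
client-profile = "local"
servers = [ "kafka-host:9092" ]

[consumer.local]
class-name = "kafka"
cluster = "local"
servers = [ "kafka-host:9092" ]

# The HTTP server on port 8000 mentioned in the note above.
[httpserver.default]
address = ":8000"
```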


How Does Kafka Burrow Work?

As we have discussed, Kafka Burrow provides monitoring functionality for the Kafka environment, and it is mainly useful for checking lag in Kafka consumers. We use the Kafka ecosystem for streaming, and in streaming we need a quick response from the producer as well as from the consumer (the application or tool that consumes the data). If there is any lag or delay, the data loses its value by the time it reaches the end of the pipeline. Hence, we need to identify whether any lag exists while fetching or consuming the data, and fix it. This is where Kafka Burrow comes into the picture: it provides a UI on which detailed lag information is displayed, such as whether any lag is present or not.

(Figure: Kafka Burrow dataflow)

The Kafka Burrow project comes from LinkedIn's Data Infrastructure Streaming SRE team. The dataflow above represents how Burrow works. Burrow automatically monitors all Kafka consumers and the partitions they are consuming, and keeps the detailed information about each consumer at a centralized level, distinct from any single consumer's own view. Here, a sliding-window concept comes into the picture: consumer status is determined by evaluating consumer behavior over the window. For each Kafka partition, Burrow checks information such as: has the consumer committed an offset, is the consumer's offset value increasing, is lag present, and is the lag increasing steadily or constant. Based on these checks, it reports a consumer status: OK (no action needed), WARNING (the consumer is working but falling behind), or ERROR (the consumer is not working). All of this is handled through simple HTTP calls, and the necessary alerts are generated from the API results.
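The per-partition checks described above can be sketched as a small rule evaluator. This is a simplified illustration of the stated rules, not Burrow's actual implementation (Burrow's real evaluator handles more cases, such as offset expiry and stopped commits):

```python
# Simplified sketch of Burrow-style partition evaluation over a sliding window
# of committed offsets. Illustrative only -- not Burrow's actual algorithm.
# Rules from the text: no commits or stalled offsets with lag -> ERROR,
# lag steadily increasing -> WARNING, otherwise -> OK.

def evaluate_partition(offsets: list[int], lags: list[int]) -> str:
    """offsets: committed offsets in the window; lags: lag at each commit."""
    if not offsets:
        return "ERROR"      # the consumer has committed nothing
    if len(set(offsets)) == 1 and lags[-1] > 0:
        return "ERROR"      # offsets are not moving while lag is present
    if all(b >= a for a, b in zip(lags, lags[1:])) and lags[-1] > lags[0]:
        return "WARNING"    # lag is monotonically increasing: falling behind
    return "OK"

print(evaluate_partition([10, 20, 30], [5, 3, 0]))    # consuming fine
print(evaluate_partition([10, 20, 30], [5, 8, 12]))   # falling behind
print(evaluate_partition([], []))                     # stopped consumer
```

A group's overall status would then be the worst status across all of its partitions.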

Below are the features of Kafka Burrow:

1. In the Kafka Burrow environment, there is no need to define a threshold value; consumer groups are evaluated over a sliding window.


2. Kafka Burrow supports multiple Kafka clusters and different Kafka environments.

3. It automatically monitors all Kafka consumers in the environment, using the committed offset values for monitoring.

4. Similarly to Kafka, committed offsets stored in ZooKeeper can be monitored.

5. Similarly to ZooKeeper, committed offsets from Storm can be monitored.

6. It supports HTTP endpoints for consumer group status, as well as for broker and consumer information.

7. Kafka Burrow also provides alerting functionality. Email alerts can be configured and sent to the respective stakeholders.

8. An HTTP client alert (webhook) can also be configured, sending the alert to another application, tool, or system for all groups as well as the respective stakeholders.
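The email and HTTP notifiers from points 7 and 8 are configured in the same TOML file as the rest of Burrow. The fragment below is illustrative: addresses, URLs, and template paths are placeholders, and the key names follow Burrow's documented notifier configuration, so verify them against your version:

```toml
# Email notifier (feature 7): sends alert mail to stakeholders.
[notifier.email]
class-name = "email"
server = "smtp.example.com"
port = 25
from = "burrow@example.com"
to = "kafka-oncall@example.com"
template-open = "config/default-email.tmpl"
send-close = false

# HTTP notifier (feature 8): posts the alert to another system.
[notifier.http]
class-name = "http"
url-open = "http://alert-gateway.example.com/v1/events"
template-open = "config/default-http-post.tmpl"
send-close = false
```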

Examples to Understand Kafka Burrow

Kafka Burrow Concept
As such, there is no specific command available to work with Kafka Burrow. It is simply a monitoring platform that helps monitor the Kafka consumers, checking whether any lag exists or not. The information is visible on the UI.
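For example, whether any lag exists for a group can be read from Burrow's consumer lag endpoint. A hedged sketch follows; the response shape mirrors Burrow's v3 lag endpoint as documented, and `local` / `my-group` plus all values are placeholders:

```python
# Sketch: decide from a Burrow lag response which partitions are lagging.
# The response shape below mirrors Burrow's v3 consumer lag endpoint
# (illustrative values); cluster "local" and group "my-group" are placeholders.

def lagging_partitions(response: dict) -> list[int]:
    """Return the partition numbers whose current lag is non-zero."""
    parts = response["status"]["partitions"]
    return [p["partition"] for p in parts if p["current_lag"] > 0]

sample = {
    "error": False,
    "status": {
        "cluster": "local",
        "group": "my-group",
        "totallag": 7,
        "partitions": [
            {"partition": 0, "current_lag": 0},
            {"partition": 1, "current_lag": 7},
        ],
    },
}

print(lagging_partitions(sample))  # partition 1 is behind
```

A dashboard UI on top of Burrow does essentially this: it polls the lag endpoint and highlights the partitions (and groups) whose lag is non-zero or growing.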

Conclusion

We have seen the complete concept of Kafka Burrow with a proper explanation. Kafka Burrow is nothing but a monitoring tool for Kafka consumers. It automatically monitors consumer lag without requiring any threshold value to be set. If any issue occurs, it sends an alert notification to the respective stakeholders.

Recommended Articles

This is a guide to Kafka Burrow. Here we discuss its definition, its syntax, how Kafka Burrow works, and examples with code implementation. You may also have a look at the following articles to learn more –

  1. Kafka Listener
  2. Kafka Zookeeper
  3. Kafka Console Producer
  4. Kafka Node
© 2022 - EDUCBA. ALL RIGHTS RESERVED. THE CERTIFICATION NAMES ARE THE TRADEMARKS OF THEIR RESPECTIVE OWNERS.
