https://engineering.linkedin.com/blog/2016/05/open-sourcing-kafka-monitor
https://github.com/linkedin/kafka-monitor
https://github.com/Microsoft/Availability-Monitor-for-Kafka
Design Overview
Kafka Monitor makes it easy to develop and execute long-running Kafka-specific system tests in real clusters, and to monitor an existing Kafka deployment against SLAs provided by users.
Developers can create new tests by composing reusable modules to emulate various scenarios (e.g. GC pauses, broker hard-kills, rolling bounces, disk failures, etc.) and collect metrics. Users can run Kafka Monitor tests that execute these scenarios on a user-defined schedule, on a test cluster or a production cluster, and validate that Kafka still functions as expected in these scenarios. To achieve this, Kafka Monitor is modeled as a manager for a collection of tests and services.
A given Kafka Monitor instance runs in a single Java process and can spawn multiple tests/services in the same process. The diagram below illustrates the relationship between services, tests, and a Kafka Monitor instance, as well as how Kafka Monitor interacts with a Kafka cluster and the user.
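As a rough illustration of that model, the sketch below shows one JVM acting as the manager, spawning every configured test and the services each test composes. The interfaces and the config-loading helper here are assumptions for illustration, not Kafka Monitor's actual API:

```java
import java.util.List;

// Illustrative only: these interfaces are assumptions, not Kafka Monitor's real API.
interface Service {
    void start();
    void stop();
}

interface Test {
    List<Service> services(); // a test is just a composition of reusable services
}

public class MonitorInstance {
    public static void main(String[] args) {
        // Hypothetical loader: a real instance would read its tests from a config file.
        List<Test> tests = loadTestsFromConfig(args.length > 0 ? args[0] : "kafka-monitor.properties");
        // A single Java process spawns every configured test and its services.
        for (Test test : tests) {
            test.services().forEach(Service::start);
        }
        Runtime.getRuntime().addShutdownHook(new Thread(() ->
                tests.forEach(t -> t.services().forEach(Service::stop))));
    }

    private static List<Test> loadTestsFromConfig(String path) {
        return List.of(); // placeholder for parsing the config file
    }
}
```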
What makes this platform interesting is that it is not just a monitoring tool: it also includes a complete test framework, in which any test can be defined as a composition of services, i.e., reusable components. For example:
- Produce service, which produces messages to Kafka and measures metrics such as produce rate and availability.
- Consume service, which consumes messages from Kafka and measures metrics including message-loss rate, message-duplication rate, and end-to-end latency. This service depends on the produce service to embed a sequence number and timestamp in each message (see the sketch after this list).
- Broker bounce service, which bounces a given broker on a pre-defined schedule.
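The sequence numbers and timestamps that the produce service embeds are what make the consume service's metrics possible. The class below is a minimal sketch of that bookkeeping, assuming a payload that carries a per-partition sequence number and a produce timestamp; none of these names come from the actual project:

```java
import java.util.HashMap;
import java.util.Map;

// Minimal sketch of deriving loss, duplication, and end-to-end latency from
// messages that embed a per-partition sequence number and produce timestamp.
public class EndToEndChecker {
    private final Map<Integer, Long> lastSeq = new HashMap<>(); // partition -> highest sequence seen
    private long lost = 0;
    private long duplicated = 0;

    /** Call once for every consumed record. */
    public void onRecord(int partition, long sequence, long produceTimeMs) {
        long latencyMs = System.currentTimeMillis() - produceTimeMs; // end-to-end latency
        Long prev = lastSeq.get(partition);
        if (prev != null && sequence <= prev) {
            duplicated++;                // re-delivered (or reordered) message
            return;                      // keep the high-water mark where it is
        }
        if (prev != null) {
            lost += sequence - prev - 1; // a gap in the sequence means lost messages
        }
        lastSeq.put(partition, sequence);
        System.out.printf("p%d seq=%d latency=%dms lost=%d dup=%d%n",
                partition, sequence, latencyMs, lost, duplicated);
    }
}
```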
With the three services above, you can assemble a broker-bounce test that validates Kafka's availability while brokers are bounced, as sketched below. Likewise, by running two Kafka Monitor instances (one producing to a cluster in one datacenter, the other consuming from the mirrored cluster in another), you can test replication between multiple datacenters.
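Under the illustrative model above, such a test is little more than the list of services it runs together. The stub classes below stand in for the real service implementations; the names and constructors are invented for this sketch:

```java
import java.util.List;

// Stubs standing in for real service implementations; names and constructors are invented.
class ProduceService {
    ProduceService(String topic) { /* produce sequenced messages to the topic */ }
}
class ConsumeService {
    ConsumeService(ProduceService source) { /* validate the source's sequenced messages */ }
}
class BrokerBounceService {
    BrokerBounceService(int brokerId, long periodMs) { /* bounce the broker on a schedule */ }
}

public class BrokerBounceTest {
    // The test is defined purely by the services it composes.
    List<Object> services() {
        ProduceService produce = new ProduceService("kafka-monitor-topic");
        return List.of(
                produce,
                new ConsumeService(produce),
                new BrokerBounceService(1, 10 * 60 * 1000L)); // bounce broker 1 every 10 minutes
    }
}
```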
Kafka Monitor Usage at LinkedIn
Monitoring Kafka Cluster Deployments
Early on, we deployed Kafka Monitor to monitor the availability and end-to-end latency of every Kafka cluster at LinkedIn. The project wiki goes into detail about how these metrics are measured. These basic but critical metrics have been extremely useful for actively monitoring the SLAs provided by our Kafka cluster deployments.
Validate Client Libraries Using End-to-End Workflows
As an earlier blog post explains, we have a client library that wraps around the vanilla Apache Kafka producer and consumer to provide various features that are not available in Apache Kafka, such as Avro encoding, auditing, and support for large messages. We also have a REST client that allows non-Java applications to produce to and consume from Kafka. It is important to validate the functionality of these client libraries against each new Kafka release. Kafka Monitor allows users to plug custom client libraries into its end-to-end workflow. We have deployed Kafka Monitor instances that use our wrapper client and REST client in their tests, to validate that their performance and functionality meet the requirements for every new release of these client libraries and of Apache Kafka.
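The sketch below shows the kind of thin adapter interface such pluggability implies: the monitor's produce path talks to an interface, and each client library (vanilla, wrapper, or REST) supplies its own implementation. The interface name and shape are assumptions for illustration, not the project's actual plug-in API:

```java
import java.util.Map;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

// Hypothetical plug-in point: the monitor produces through this interface,
// so any client library can be swapped in by providing an implementation.
interface MonitorProducer extends AutoCloseable {
    void send(String topic, String key, String value);
}

// Adapter around the vanilla Apache Kafka producer; a wrapper-client or
// REST-client adapter would implement the same interface.
class VanillaProducerAdapter implements MonitorProducer {
    private final KafkaProducer<String, String> producer;

    VanillaProducerAdapter(Map<String, Object> config) {
        this.producer = new KafkaProducer<>(config);
    }

    @Override
    public void send(String topic, String key, String value) {
        producer.send(new ProducerRecord<>(topic, key, value));
    }

    @Override
    public void close() {
        producer.close();
    }
}
```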
Certify New Internal Releases of Apache Kafka
We generally run off Apache Kafka trunk and cut a new internal release every quarter or so to pick up new features from Apache Kafka. A significant benefit of running off trunk is that deploying Kafka in LinkedIn's production clusters has often detected problems in Apache Kafka trunk early, so that they can be fixed before official Apache Kafka releases.
Given the risk of running off Apache Kafka trunk, we take extra care to certify every internal release in a test cluster, which accepts traffic mirrored from production cluster(s), for a few weeks before deploying the new release to production. For example, we do rolling bounces or hard-kill brokers while checking JMX metrics, to verify that there is exactly one controller and no offline partitions, in order to validate Kafka's availability under failover scenarios. In the past these steps were manual, which was very time-consuming and doesn't scale well with the number and types of scenarios we want to test. We are switching to Kafka Monitor to automate this process and cover more failover scenarios on a continual basis.
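A check like the one described can be scripted against Kafka's standard controller MBeans. The sketch below assumes a broker with remote JMX enabled on localhost:9999 (an illustrative address); the two MBean names are Kafka's standard controller metrics, and across a healthy cluster the ActiveControllerCount values should sum to exactly 1 while OfflinePartitionsCount stays 0:

```java
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class ControllerHealthCheck {
    public static void main(String[] args) throws Exception {
        // Illustrative address: assumes the broker was started with remote JMX on port 9999.
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://localhost:9999/jmxrmi");
        try (JMXConnector connector = JMXConnectorFactory.connect(url)) {
            MBeanServerConnection conn = connector.getMBeanServerConnection();
            // Standard Kafka controller metrics: sum ActiveControllerCount across
            // all brokers and expect exactly 1; expect 0 offline partitions.
            Object active = conn.getAttribute(new ObjectName(
                    "kafka.controller:type=KafkaController,name=ActiveControllerCount"), "Value");
            Object offline = conn.getAttribute(new ObjectName(
                    "kafka.controller:type=KafkaController,name=OfflinePartitionsCount"), "Value");
            System.out.println("ActiveControllerCount=" + active
                    + " OfflinePartitionsCount=" + offline);
        }
    }
}
```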
Open Sourcing Kafka Monitor