1. Background
Kafka originated at LinkedIn and was later open-sourced through the Apache Software Foundation. It is a distributed messaging system built on the publish-subscribe model.
2. Features
High throughput: reads and writes at hundreds of MB per second
Message persistence
High scalability
High reliability
Multi-consumer support (one of its most important features)
3. Topology
Broker: a Kafka cluster contains one or more servers, each called a broker
Producer: publishes messages to Kafka brokers
Consumer: reads messages from Kafka brokers
Consumer Group: each consumer belongs to a specific consumer group
1. Broker, producer, and consumer exist in every message queue; the concept worth a note here is the consumer group.
Suppose there are four consumer groups: a Hadoop cluster, real-time monitoring, another service, and a data warehouse. These are four different clusters, and each may contain hundreds of consumers. When a message such as "helloworld" is published, only four consumers in total receive it: exactly one consumer in each cluster consumes the message. This is how Kafka supports multiple consumers.
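The delivery rule above can be sketched in plain Python. This is a simulation, not the real Kafka client API: the `Group` class and `publish` function are hypothetical names used only to illustrate that every group sees each message, while only one consumer inside a group receives it.

```python
# Simulation of Kafka's consumer-group delivery semantics (not the Kafka API).
from itertools import cycle

class Group:
    def __init__(self, name, consumers):
        self.name = name
        self._rr = cycle(consumers)  # round-robin choice within the group

    def deliver(self, message):
        consumer = next(self._rr)    # exactly one consumer in this group gets it
        return (self.name, consumer, message)

def publish(groups, message):
    # Every group independently receives its own copy of the message.
    return [g.deliver(message) for g in groups]

groups = [
    Group("hadoop-cluster", ["c1", "c2", "c3"]),
    Group("real-time-monitoring", ["c1", "c2"]),
    Group("other-service", ["c1"]),
    Group("data-warehouse", ["c1", "c2"]),
]
deliveries = publish(groups, "helloworld")
# Four groups -> exactly four deliveries, one per group.
```

With hundreds of consumers spread over the four groups, publishing one message still produces only four deliveries, matching the "helloworld" example above.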
2. Kafka uses ZooKeeper as its configuration center, coordinating the relationships between brokers and consumers. As the lines in the figure show, the Kafka producer does not connect to ZooKeeper.
4. Basic Concepts
There are a few basic concepts worth comparing.
Topic
A logical queue.
Partition
Physically, a topic is divided into multiple partitions.
A topic's partitions are distributed across multiple brokers (for load balancing and redundancy; many distributed components share this design, for example MongoDB sharding).
1. For example, suppose our Kafka cluster has 3 brokers and we create 1 topic, specifying 3 partitions at creation time. The partitions are distributed evenly across the brokers, so each broker holds one partition. The three partitions are physically separate, but they still belong to the same topic, which is logically a single queue.
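The even distribution in this example can be sketched with a simple round-robin assignment. This is an illustrative model, not Kafka's actual partition-placement algorithm; the function name `assign_partitions` and the broker names are made up for the sketch.

```python
# Hypothetical sketch: spreading 3 partitions of one topic evenly
# over 3 brokers, mirroring the example above.
def assign_partitions(num_partitions, brokers):
    # Partition i is placed on broker i mod len(brokers).
    return {p: brokers[p % len(brokers)] for p in range(num_partitions)}

assignment = assign_partitions(3, ["broker-0", "broker-1", "broker-2"])
# Each broker ends up with exactly one partition of the topic.
```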
2. Kafka only guarantees message ordering at the partition level.
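Per-partition ordering follows from how messages are routed: messages with the same key hash to the same partition, so their relative order is preserved there, while messages in different partitions have no global ordering guarantee. The sketch below uses CRC32 as a stand-in hash (Kafka's default partitioner actually uses murmur2); `partition_for` is a hypothetical helper, not a Kafka API.

```python
# Sketch of keyed partitioning: same key -> same partition -> ordered.
import zlib

def partition_for(key, num_partitions):
    # CRC32 stands in for Kafka's murmur2-based default partitioner.
    return zlib.crc32(key.encode()) % num_partitions

# Append each message to the partition chosen by its key.
log = {p: [] for p in range(3)}
for seq, key in enumerate(["user-a", "user-b", "user-a", "user-a"]):
    log[partition_for(key, 3)].append((key, seq))

# All "user-a" events land in one partition, in the order they were sent.
user_a_events = [seq for part in log.values() for key, seq in part if key == "user-a"]
```

Within the partition holding `user-a`, the sequence numbers stay in send order; across partitions, no such guarantee exists.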
5. Applicable scenarios
In industry, Kafka is mainly used for log processing, because big-data frameworks such as Flink, Storm, and Spark integrate with it very well. It can also be used for business-logic processing, mainly in multi-consumer scenarios; you can design your own projects with your situation in mind.
Some time ago, the "Architect" account pushed an article on the "Like log unified platform", which resembles our team's log processing system; other log processing systems are similar. It covers log access, log transfer, log processing, and log storage. Because the log-processing stage supports multiple consumers, you can use Spark for real-time data analysis, do simple processing in place to back up important logs, or run offline log analysis according to business needs.
Reference
- Kafka official documentation
- Like log unified platform