How Kafka achieves high throughput
Original · 2016-02-27 · Du Yishu · Performance and Architecture
Kafka is a distributed messaging system that must handle massive volumes of messages. By design it writes every message to inexpensive hard drives in exchange for greater storage capacity, yet in practice using hard drives does not cause excessive performance loss.
Kafka mainly relies on the following techniques to achieve its ultra-high throughput.
Sequential Read and Write
Kafka messages are continually appended to files, which lets Kafka take full advantage of the disk's sequential read/write performance.
Sequential reads and writes avoid head-seek time and incur only a small amount of rotational latency, so they are much faster than random reads and writes.
Kafka's official test data (RAID-5, 7200 rpm):
Sequential I/O: 600 MB/s
Random I/O: 100 KB/s
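The append-only pattern described above can be sketched as follows. This is an illustrative toy, not Kafka's actual storage code; the class and file names are made up, but it shows why every write lands at the end of the file and never seeks backwards.

```python
import os
import tempfile

# Minimal sketch of an append-only log in the spirit of Kafka's storage layer.
# Messages are only ever appended, so on spinning media the disk head never
# has to seek between writes.

class AppendOnlyLog:
    def __init__(self, path: str):
        # "ab" opens the file in append mode: every write goes to the end
        self.file = open(path, "ab")

    def append(self, message: bytes) -> int:
        # Remember the offset where this message starts, then append it
        offset = self.file.tell()
        self.file.write(len(message).to_bytes(4, "big"))  # 4-byte length prefix
        self.file.write(message)
        return offset

    def close(self):
        self.file.flush()
        self.file.close()

path = os.path.join(tempfile.mkdtemp(), "00000000.log")
log = AppendOnlyLog(path)
first = log.append(b"hello")   # starts at offset 0
second = log.append(b"world")  # starts right after the first record (offset 9)
log.close()
```

Because each record's start offset is simply "end of the previous record", readers can also scan the file sequentially, which benefits from the same disk characteristics.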
Zero-Copy
First, a quick look at how the file system works, taking as an example a program that sends a file's contents over the network.
The program runs in user space; the file and the network socket are hardware resources, and kernel space sits between the two.
Inside the operating system, the whole process is: the file is read from disk into a kernel buffer (the page cache), copied from there into a user-space buffer, copied back into a kernel socket buffer, and finally copied to the network card. That is four copies, with context switches between user mode and kernel mode along the way.
Since Linux kernel 2.2, a system call mechanism known as zero-copy (e.g. sendfile) skips the user-buffer copy entirely: a direct mapping is established between the disk data and the socket, so the data is never copied into the user-space buffer.
Context switches drop from four to two, roughly doubling performance.
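The zero-copy path is exposed in Python as `os.sendfile()`, a thin wrapper over the `sendfile(2)` syscall (Linux/macOS; not available on Windows). The sketch below, using a local socket pair in place of a real network connection, shows the pattern: the kernel moves bytes from the page cache straight to the socket, and user space never touches the data.

```python
import os
import socket
import tempfile

# Sketch: sending a file over a socket with zero-copy via sendfile(2).
# Assumes a Unix-like OS where os.sendfile() is available.

def send_file_zero_copy(sock: socket.socket, path: str) -> int:
    with open(path, "rb") as f:
        size = os.fstat(f.fileno()).st_size
        sent = 0
        while sent < size:
            # The kernel copies directly from the page cache to the socket;
            # sendfile may send fewer bytes than requested, so loop.
            sent += os.sendfile(sock.fileno(), f.fileno(), sent, size - sent)
        return sent

# Demonstrate with a socket pair instead of a real network connection
path = os.path.join(tempfile.mkdtemp(), "payload.bin")
with open(path, "wb") as f:
    f.write(b"x" * 10_000)

left, right = socket.socketpair()
sent = send_file_zero_copy(left, path)
left.close()

received = bytearray()
while True:
    chunk = right.recv(4096)
    if not chunk:
        break
    received.extend(chunk)
right.close()
```

Kafka's broker uses the same mechanism (via Java NIO's `FileChannel.transferTo`) when shipping log segments to consumers.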
File Segmentation
A Kafka queue (topic) is divided into multiple partitions, and each partition is divided into multiple segments, so the messages of one queue are actually stored across many segment files.
With segmentation, every file operation touches only a small file, which keeps operations lightweight and also increases parallelism.
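One practical payoff of segmentation is fast lookup. Kafka names each segment file after the offset of its first message, so finding the segment that holds a given offset is a binary search over the base offsets. A sketch, with illustrative (made-up) offset values:

```python
import bisect

# Kafka-style segment files are named after their first message's offset,
# e.g. 00000000000000000000.log, 00000000000000170410.log, ...
# The base offsets below are illustrative, not real data.
segment_base_offsets = [0, 170410, 239430, 402530]

def find_segment(offset: int) -> int:
    # The message lives in the last segment whose base offset <= offset
    i = bisect.bisect_right(segment_base_offsets, offset) - 1
    return segment_base_offsets[i]

print(find_segment(100))     # segment starting at 0
print(find_segment(200000))  # segment starting at 170410
```

Segmentation also makes retention cheap: expiring old data is just deleting whole segment files, never rewriting a large one.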
Bulk Send
Kafka allows messages to be sent in batches: messages are first buffered in memory and then sent together in a single request.
The policy can be, for example, to flush once a certain number of messages have accumulated, or after a fixed delay.
For instance: send once 100 messages are buffered, or send every 5 seconds.
This strategy greatly reduces the number of I/O operations on the server.
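The "100 messages or 5 seconds" policy above can be sketched as a size-or-time batcher. This toy version collects batches in a list instead of sending them over the network; in a real Kafka producer the equivalent knobs are the `batch.size` and `linger.ms` settings.

```python
import time

# Toy size-or-time batcher: flush when max_messages accumulate
# or when the oldest buffered message has waited max_delay seconds.

class Batcher:
    def __init__(self, max_messages: int = 100, max_delay: float = 5.0):
        self.max_messages = max_messages
        self.max_delay = max_delay
        self.buffer = []
        self.batches = []          # stands in for actual network sends
        self.first_msg_time = None

    def send(self, message):
        if self.first_msg_time is None:
            self.first_msg_time = time.monotonic()
        self.buffer.append(message)
        # Flush when either threshold is hit, so one I/O covers many messages
        if (len(self.buffer) >= self.max_messages
                or time.monotonic() - self.first_msg_time >= self.max_delay):
            self.flush()

    def flush(self):
        if self.buffer:
            self.batches.append(self.buffer)
            self.buffer = []
            self.first_msg_time = None

batcher = Batcher(max_messages=3)
for i in range(7):
    batcher.send(i)
batcher.flush()  # flush the trailing partial batch
print(batcher.batches)  # [[0, 1, 2], [3, 4, 5], [6]]
```

The trade-off is latency: a message may wait up to the delay threshold before leaving the producer, in exchange for far fewer requests.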
Data Compression
Kafka also supports compressing message sets: the producer can compress a batch of messages in gzip or snappy format.
Compression reduces the amount of data transferred and relieves pressure on the network.
With the producer compressing and the consumer decompressing, the CPU does more work, but for big-data workloads the bottleneck is the network rather than the CPU, so this cost is well worth it.
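A quick sketch of the compression trade-off using Python's standard gzip module (the message content below is invented for illustration). Log-style message batches are highly repetitive, which is exactly why compressing the whole batch, rather than each message alone, pays off:

```python
import gzip
import json

# Producer side: serialize a batch of similar messages and gzip it as one unit.
# Repetitive field names and values compress extremely well.
messages = [{"user": "u1", "event": "click", "page": "/home"}] * 100
raw = json.dumps(messages).encode("utf-8")
compressed = gzip.compress(raw)

print(len(raw), len(compressed))  # compressed is a small fraction of raw

# Consumer side: decompress, then parse - the CPU cost moves here
assert json.loads(gzip.decompress(compressed)) == messages
```

Because the broker stores the batch in compressed form, the saving applies to disk as well as to the network.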