The acquisition layer is mainly built with one of two technologies: Flume or Kafka.
Flume: Flume is a pipeline-style data-flow tool. It ships with a number of default component implementations that users can wire together purely through configuration parameters, and it can be extended through its API.
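For illustration, here is a minimal sketch of a Flume agent configuration; the agent name (agent1), bind address, port, and HDFS path are all assumed placeholders, not values from this article. It wires one of the built-in netcat sources through a file channel into the built-in HDFS sink, entirely through parameters:

    # Illustrative agent "agent1": netcat source -> file channel -> HDFS sink
    agent1.sources  = src1
    agent1.channels = ch1
    agent1.sinks    = snk1

    # Built-in netcat source, configured purely through parameters
    agent1.sources.src1.type = netcat
    agent1.sources.src1.bind = 0.0.0.0
    agent1.sources.src1.port = 44444
    agent1.sources.src1.channels = ch1

    # Durable file channel (uses default checkpoint/data directories)
    agent1.channels.ch1.type = file

    # Built-in HDFS sink; the path is a placeholder
    agent1.sinks.snk1.type = hdfs
    agent1.sinks.snk1.hdfs.path = hdfs://namenode:8020/flume/events
    agent1.sinks.snk1.channel = ch1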
Kafka: Kafka is a durable, distributed message queue.
Kafka is a very general-purpose system: you can have many producers and many consumers sharing multiple topics. By contrast, Flume is a special-purpose tool designed to send data to HDFS and HBase. It has specific optimizations for HDFS and integrates with Hadoop's security features. Cloudera therefore recommends Kafka when the data will be consumed by multiple systems, and Flume when the data is destined for Hadoop.
As you may know, Flume ships with many ready-made source and sink components. Kafka, by contrast, has a noticeably smaller ecosystem of off-the-shelf producers and consumers, and community-maintained connectors are still limited. Hopefully this will improve in the future, but for now, choosing Kafka means being prepared to write your own producer and consumer code. If the existing Flume sources and sinks meet your needs and you prefer a system that requires no custom development, use Flume.
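To make that trade-off concrete, the sketch below shows what "writing your own producer code" looks like with Kafka's Java client. The broker address, topic name, and record contents are assumptions for illustration only:

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;

    public class LogProducer {
        public static void main(String[] args) {
            Properties props = new Properties();
            // Assumed broker address; replace with your own cluster
            props.put("bootstrap.servers", "broker1:9092");
            props.put("key.serializer",
                    "org.apache.kafka.common.serialization.StringSerializer");
            props.put("value.serializer",
                    "org.apache.kafka.common.serialization.StringSerializer");

            // Publish one illustrative event to a hypothetical "web-logs" topic
            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                producer.send(new ProducerRecord<>("web-logs", "host1",
                        "GET /index.html 200"));
            }
        }
    }

With Flume, the equivalent step is just a few configuration lines; with Kafka, every new data source needs a small program like this one.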
Kafka and Flume are both reliable systems that can guarantee zero data loss with proper configuration. However, Flume does not replicate events. If the node running a Flume agent crashes, you will lose the events buffered on it, even with the reliable file channel, until the disk is recovered. If you need a highly available pipeline, Kafka is the better choice.
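Kafka's advantage here comes from topic replication. As a hedged example (the exact flags depend on the Kafka version; newer releases use --bootstrap-server, older ones --zookeeper, and the topic name and broker address are placeholders), a topic replicated across three brokers can survive the loss of a node:

    kafka-topics.sh --create --topic web-logs \
        --partitions 3 --replication-factor 3 \
        --bootstrap-server broker1:9092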
Flume and Kafka can also be used together. The typical pattern is Flume + Kafka: Flume collects the data and publishes it to Kafka. Conversely, to take advantage of Flume's existing HDFS-writing capability, you can use Kafka + Flume, with Flume consuming from Kafka and delivering the events to HDFS.
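As a sketch of the Kafka + Flume pattern (assuming Flume 1.7+, whose Kafka source uses the kafka.* property names shown; the broker, topic, and HDFS path are again placeholders), a Flume agent can consume from Kafka and reuse the built-in HDFS sink:

    # Illustrative agent "agent2": Kafka source -> file channel -> HDFS sink
    agent2.sources  = ksrc
    agent2.channels = ch1
    agent2.sinks    = snk1

    # Built-in Kafka source reading the topic the producers write to
    agent2.sources.ksrc.type = org.apache.flume.source.kafka.KafkaSource
    agent2.sources.ksrc.kafka.bootstrap.servers = broker1:9092
    agent2.sources.ksrc.kafka.topics = web-logs
    agent2.sources.ksrc.channels = ch1

    agent2.channels.ch1.type = file

    # Flume's existing HDFS-writing capability, no custom code needed
    agent2.sinks.snk1.type = hdfs
    agent2.sinks.snk1.hdfs.path = hdfs://namenode:8020/flume/from-kafka
    agent2.sinks.snk1.channel = ch1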
"Acquisition Layer" Kafka and Flume how to choose