A complete big-data processing system needs more than the core analysis stack built from HDFS + MapReduce + Hive; it also relies on indispensable auxiliary systems for data collection, result data export, task scheduling, and so on. For each of these auxiliary tasks there is a convenient open-source framework in the Hadoop ecosystem.
Log collection framework: Flume
Flume is a distributed, reliable, and highly available system for collecting, aggregating, and transmitting large volumes of logs.
Flume can collect source data in many forms, such as files and socket packets, and can export the collected data to many external storage systems, such as HDFS, HBase, Hive, and Kafka queues.
Common collection requirements can be met with simple Flume configuration.
Flume also offers good extensibility through custom components for special scenarios, so it can be used for most everyday data collection scenarios.
How Flume runs
1. The core role in a distributed Flume system is the agent; a Flume collection system is formed by connecting agents together.
2. Each agent is essentially a data-forwarding unit with three internal components:
a) Source: the collection source, which connects to the data source and gathers data
b) Sink: the destination, which forwards collected data to the next-level agent or delivers it to the final storage system
c) Channel: the agent's internal data transfer channel, which passes data from the source to the sink
Flume supports numerous source and sink types
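As an illustration of the three components, a minimal single-agent configuration might look like the following sketch (the agent name a1, the netcat source, and the logger sink are assumptions chosen for the example, not part of the original text):

# example.conf : a single-agent configuration sketch (agent name a1 is assumed)
a1.sources = r1
a1.sinks = k1
a1.channels = c1

# source: read events from a netcat socket (one of many supported source types)
a1.sources.r1.type = netcat
a1.sources.r1.bind = localhost
a1.sources.r1.port = 44444

# sink: write the collected events to the log for demonstration
a1.sinks.k1.type = logger

# channel: buffer events in memory while they travel from source to sink
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100

# bind source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1

In a real deployment the logger sink would typically be replaced by an HDFS, HBase, or Kafka sink, and the netcat source by a file or spooling-directory source, depending on the collection requirement.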
Installation and deployment of Flume
1. Installing Flume is very simple: you only need to unpack the archive, assuming a Hadoop environment is already in place.
Upload the installation package to the node on which the data source resides
Then unpack it: tar -zxvf apache-flume-1.6.0-bin.tar.gz
Then enter the flume directory, edit conf/flume-env.sh, and configure JAVA_HOME.
2. According to the data collection requirements, design a collection scheme and describe it in a configuration file (the file name can be chosen freely).
3. Start the Flume agent on the corresponding node, specifying the collection scheme configuration file (a command sketch follows this list).
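Putting the three steps together, a minimal command sketch might look like this (the version number, the example.conf file name, the JAVA_HOME path, and the agent name a1 are assumptions for the example):

tar -zxvf apache-flume-1.6.0-bin.tar.gz
cd apache-flume-1.6.0-bin
# set JAVA_HOME in conf/flume-env.sh, e.g.
#   export JAVA_HOME=/usr/local/jdk1.7.0_45
# start the agent, pointing it at the chosen configuration file and agent name
bin/flume-ng agent --conf conf --conf-file example.conf --name a1 -Dflume.root.logger=INFO,console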