Overview
- Flume is a distributed, reliable, and highly available system for collecting, aggregating, and transmitting large volumes of logs.
- Flume can collect data from files, socket packets, and other sources, and can export the collected data to many external storage systems such as HDFS, HBase, Hive, and Kafka.
- Common data collection requirements can be met with simple Flume configuration.
- Flume also offers good custom extensibility for special scenarios, so it can be used for most everyday data collection scenarios.
Operating Mechanism
1. The core role in Flume's distributed system is the agent; a Flume collection system is formed by connecting agents together.
2. Each agent is effectively a data courier with three internal components:
a) Source: the collection source, which interfaces with the data source to collect data.
b) Sink: the destination, which transmits collected data to the next-level agent or to the final storage system.
c) Channel: the agent's internal data transfer channel, which passes data from the Source to the Sink.
Note: Data passed from Source to Channel to Sink takes the form of events; an Event is the basic unit of data flow.
Flume collection system structure diagram
1. Simple Structure
Single agent collects data
2. Complex Structure
Multi-level agents connected in series
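In the complex structure, one agent's sink forwards events to the next agent's source. A minimal sketch of such a two-level chain using Flume's Avro sink and Avro source (the host name, port, and channel names are illustrative):

```properties
# Level-1 agent (a1): forward events to the next agent over Avro
a1.sinks.k1.type = avro
a1.sinks.k1.hostname = agent2-host
a1.sinks.k1.port = 4141
a1.sinks.k1.channel = c1

# Level-2 agent (a2): receive events from the upstream agent
a2.sources.r1.type = avro
a2.sources.r1.bind = 0.0.0.0
a2.sources.r1.port = 4141
a2.sources.r1.channels = c1
```

The Avro sink/source pair is Flume's standard mechanism for agent-to-agent transfer; each agent still needs its own channel and remaining components configured as usual.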
Flume Practical Cases
Installation and deployment of Flume
1. Installing Flume is very simple: it only needs to be decompressed (assuming a Hadoop environment is already in place).
Upload the installation package to the node where the data source resides,
then unpack it: tar -zxvf apache-flume-1.6.0-bin.tar.gz
Then enter the Flume directory, modify flume-env.sh under conf, and configure JAVA_HOME.
2. Configure the collection scheme according to the data acquisition requirements, describing it in a configuration file (the file name can be chosen freely).
3. Specify the collection scheme's configuration file and start the Flume agent on the corresponding node.
Let's start with a simple example to test if the program environment is normal.
1. First, create a new file in Flume's conf directory:
vi netcat-logger.conf
# Name the components of this agent
a1.sources = r1
a1.sinks = k1
a1.channels = c1

# Describe and configure the source component: r1
a1.sources.r1.type = netcat
a1.sources.r1.bind = localhost
a1.sources.r1.port = 44444

# Describe and configure the sink component: k1
a1.sinks.k1.type = logger

# Describe and configure the channel component; here the memory buffer is used
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100

# Describe and configure the source-channel-sink connections
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1
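The logger sink above only prints events to the console. As mentioned in the overview, Flume can also export data to external systems such as HDFS; a hedged sketch of swapping in an HDFS sink in place of the logger (the path and roll settings are illustrative):

```properties
# Replace the logger sink with an HDFS sink (illustrative settings)
a1.sinks.k1.type = hdfs
a1.sinks.k1.hdfs.path = /flume/events/%Y-%m-%d
a1.sinks.k1.hdfs.fileType = DataStream
a1.sinks.k1.hdfs.rollInterval = 60
a1.sinks.k1.hdfs.useLocalTimeStamp = true
a1.sinks.k1.channel = c1
```

The %Y-%m-%d escapes in the path are resolved from the event timestamp; setting hdfs.useLocalTimeStamp = true lets the sink use the local time when events carry no timestamp header.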
2. Start the agent to collect data
bin/flume-ng agent -c conf -f conf/netcat-logger.conf -n a1 -Dflume.root.logger=INFO,console
-c conf specifies the directory containing Flume's own configuration files
-f conf/netcat-logger.conf specifies the collection scheme we described
-n a1 specifies the name of our agent
3. Testing
First, send data to the port the agent is listening on, so that the agent has data to collect.
On any machine that can reach the agent node over the network, run:
telnet agent-hostname port (e.g. telnet localhost 44444)