02_Flume: Installation and Deployment

Source: Internet
Author: User

I. Installing and deploying Flume:

 Installing Flume is very simple: it only needs to be decompressed, provided a Hadoop environment is already in place.

 The installation package is: http://www-us.apache.org/dist/flume/1.7.0/apache-flume-1.7.0-bin.tar.gz

 1. Upload the installation package to the node where the data source resides;
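If the node has network access, the tarball can also be fetched directly on it instead of uploading — a sketch using the mirror URL given above (the mirror may no longer host this version):

```shell
# Download the Flume 1.7.0 binary tarball directly on the data-source node
wget http://www-us.apache.org/dist/flume/1.7.0/apache-flume-1.7.0-bin.tar.gz
```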

 2. Then unzip it (the version should match the package downloaded above): tar -zxvf apache-flume-1.7.0-bin.tar.gz -C /usr/local/src/

3. Then enter the Flume directory and edit flume-env.sh under conf to configure JAVA_HOME.
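A minimal sketch of that change (the JDK path is an assumption — point it at the actual Java installation on your node):

```shell
# In <flume-home>/conf/flume-env.sh
# (copy it from flume-env.sh.template first if it does not exist)
export JAVA_HOME=/usr/local/src/jdk1.8.0_65   # assumed JDK path; adjust to your environment
```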

4. Design a collection scheme according to the data-acquisition requirements, and describe it in a configuration file (the file name can be anything you like).

 5. Start the Flume agent on the appropriate node, specifying the collection-scheme configuration file.

II. Usage example:

Use Flume to implement a case that collects data from a network socket port and sinks it to the logger (source data: network port).

1. Configure the collection scheme: write a netcat-logger.conf file in Flume's conf directory, as follows:

# Name the components on this agent (a1 is the agent's name)
a1.sources = r1
a1.sinks = k1
a1.channels = c1

# Describe/configure the source
# type netcat: receives data from a network port, listening on the given host
a1.sources.r1.type = netcat
a1.sources.r1.bind = shizhan2
a1.sources.r1.port = 44444

# Describe the sink
a1.sinks.k1.type = logger

# Use a channel which buffers events in memory
# capacity: the maximum number of events the channel can store
# transactionCapacity: the maximum number of events taken from the source
# or given to the sink in one transaction
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100

# Bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1

2. Start the agent to begin collecting data, using the following command:

bin/flume-ng agent -c conf -f conf/netcat-logger.conf -n a1 -Dflume.root.logger=INFO,console

    -c conf: specifies the directory containing Flume's own configuration files

-f conf/netcat-logger.conf: specifies the collection scheme we described

-n a1: specifies the name of our agent

3. Feed in data: on another machine, send data to the port the agent is listening on, so that the agent has data to collect.
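A typical way to send test lines to the netcat source configured above, assuming the host and port from netcat-logger.conf (each line typed becomes one Flume event):

```shell
# Connect to the agent's netcat source and type some lines
telnet shizhan2 44444
# or, equivalently:
# nc shizhan2 44444
```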

  

Data will then appear in the output of the terminal where Flume was started.

  

  

 
