Flume 1.6.0 high-availability test and delivering data into Kafka


Machine list:

192.168.137.115 slave0 (agent)
192.168.137.116 slave1 (agent)
192.168.137.117 slave2 (agent)
192.168.137.118 slave3 (collector)
192.168.137.119 slave4 (collector)
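For these hostnames to resolve, each machine needs matching entries in /etc/hosts (a sketch, assuming no DNS is in play):

192.168.137.115 slave0
192.168.137.116 slave1
192.168.137.117 slave2
192.168.137.118 slave3
192.168.137.119 slave4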


Create the following directories on each machine:

mkdir -p /home/qun/data/flume/logs

mkdir -p /home/qun/data/flume/data

mkdir -p /home/qun/data/flume/checkpoint
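If passwordless SSH is set up between the machines and each node runs a bash login shell (both assumptions), the directories can be created from one node in a single loop:

for h in slave0 slave1 slave2 slave3 slave4; do
  ssh $h "mkdir -p /home/qun/data/flume/{logs,data,checkpoint}"
done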


Download and unpack the Flume 1.6.0 release:

wget http://archive.apache.org/dist/flume/1.6.0/apache-flume-1.6.0-bin.tar.gz

tar -zxvf apache-flume-1.6.0-bin.tar.gz
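The touch and flume-ng commands later in this post assume the tarball was extracted to /home/qun/apache-flume-1.6.0-bin; exporting FLUME_HOME and extending PATH accordingly keeps them short (a small convenience sketch):

export FLUME_HOME=/home/qun/apache-flume-1.6.0-bin
export PATH=$PATH:$FLUME_HOME/bin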


On slave3 and slave4, configure the collectors:

touch $FLUME_HOME/conf/server.conf

The contents are as follows:

a1.sources = r1
a1.channels = c1
a1.sinks = k1

# set channel
a1.channels.c1.type = file
a1.channels.c1.checkpointDir = /home/qun/data/flume/checkpoint
a1.channels.c1.dataDirs = /home/qun/data/flume/data

# avro source receiving from the agents
a1.sources.r1.type = avro
a1.sources.r1.bind = slave3
a1.sources.r1.port = 52020
a1.sources.r1.interceptors = i1
a1.sources.r1.interceptors.i1.type = static
a1.sources.r1.interceptors.i1.key = Collector
a1.sources.r1.interceptors.i1.value = slave3
a1.sources.r1.channels = c1

# set sink to kafka
a1.sinks.k1.type = org.apache.flume.sink.kafka.KafkaSink
a1.sinks.k1.topic = mytopic
a1.sinks.k1.brokerList = kafkahost:9092
a1.sinks.k1.requiredAcks = 1
a1.sinks.k1.batchSize = 100
a1.sinks.k1.channel = c1

(On slave4, change a1.sources.r1.bind and a1.sources.r1.interceptors.i1.value to slave4.)
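The sink publishes to the topic mytopic on the broker kafkahost:9092, so the topic must exist unless the broker auto-creates topics. With the Kafka 0.8.x tooling that Flume 1.6.0's sink targets, it can be created like this (a sketch; the ZooKeeper address kafkahost:2181 is an assumption):

kafka-topics.sh --create --zookeeper kafkahost:2181 --replication-factor 1 --partitions 1 --topic mytopic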



On slave0, slave1, and slave2, configure the agents:

touch $FLUME_HOME/conf/client.conf

The contents are as follows:

agent1.channels = c1
agent1.sources = r1
agent1.sinks = k1 k2

# set group
agent1.sinkgroups = g1

# set channel
agent1.channels.c1.type = file
agent1.channels.c1.checkpointDir = /home/qun/data/flume/checkpoint
agent1.channels.c1.dataDirs = /home/qun/data/flume/data

agent1.sources.r1.channels = c1
agent1.sources.r1.type = spooldir
agent1.sources.r1.spoolDir = /home/qun/data/flume/logs
agent1.sources.r1.fileHeader = false
agent1.sources.r1.interceptors = i1 i2
agent1.sources.r1.interceptors.i1.type = static
agent1.sources.r1.interceptors.i1.key = type
agent1.sources.r1.interceptors.i1.value = LOGIN
agent1.sources.r1.interceptors.i2.type = timestamp

# set sink1
agent1.sinks.k1.channel = c1
agent1.sinks.k1.type = avro
agent1.sinks.k1.hostname = slave3
agent1.sinks.k1.port = 52020

# set sink2
agent1.sinks.k2.channel = c1
agent1.sinks.k2.type = avro
agent1.sinks.k2.hostname = slave4
agent1.sinks.k2.port = 52020

# set sink group
agent1.sinkgroups.g1.sinks = k1 k2

# set failover
agent1.sinkgroups.g1.processor.type = failover
agent1.sinkgroups.g1.processor.priority.k1 = 10
agent1.sinkgroups.g1.processor.priority.k2 = 1
agent1.sinkgroups.g1.processor.maxpenalty = 10000
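The failover processor always routes events to the highest-priority live sink, so k1 (slave3, priority 10) receives all traffic until it fails and k2 (slave4, priority 1) takes over. If you wanted to spread load across both collectors instead, Flume's load_balance processor is a drop-in alternative for the failover block (a sketch):

agent1.sinkgroups.g1.sinks = k1 k2
agent1.sinkgroups.g1.processor.type = load_balance
agent1.sinkgroups.g1.processor.backoff = true
agent1.sinkgroups.g1.processor.selector = round_robin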


Start the collectors on slave3 and slave4:

flume-ng agent -n a1 -c conf -f /home/qun/apache-flume-1.6.0-bin/conf/server.conf -Dflume.root.logger=DEBUG,console
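Running in the foreground with DEBUG,console is convenient for this test; for anything longer-lived you would normally background the process and log to a file instead, for example (the LOGFILE appender is defined in Flume's stock log4j.properties):

nohup flume-ng agent -n a1 -c conf -f /home/qun/apache-flume-1.6.0-bin/conf/server.conf -Dflume.root.logger=INFO,LOGFILE > /dev/null 2>&1 &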


Start the agents on slave0, slave1, and slave2:

flume-ng agent -n agent1 -c conf -f /home/qun/apache-flume-1.6.0-bin/conf/client.conf -Dflume.root.logger=DEBUG,console


Test basic functionality


echo "Hello flume" >> /home/qun/data/flume/logs/test.txt

Log received by the collector agent on slave3:

16/05/26 12:44:24 INFO file.EventQueueBackingStoreFile: Start checkpoint for /home/qun/data/flume/checkpoint/checkpoint, elements to sync = 2
16/05/26 12:44:24 INFO file.EventQueueBackingStoreFile: Updating checkpoint metadata: logWriteOrderID: 1464235734894, queueSize: 0, queueHead: 0
16/05/26 12:44:24 INFO file.Log: Updated checkpoint for file: /home/qun/data/flume/data/log-3 position: 786 logWriteOrderID: 1464235734894
16/05/26 12:44:24 INFO file.Log: Removing old file: /home/qun/data/flume/data/log-1
16/05/26 12:44:24 INFO file.Log: Removing old file: /home/qun/data/flume/data/log-1.meta
16/05/26 12:44:54 INFO file.EventQueueBackingStoreFile: Start checkpoint for /home/qun/data/flume/checkpoint/checkpoint, elements to sync = 2
16/05/26 12:44:54 INFO file.EventQueueBackingStoreFile: Updating checkpoint metadata: logWriteOrderID: 1464235734901, queueSize: 0, queueHead: 0
16/05/26 12:44:54 INFO file.Log: Updated checkpoint for file: /home/qun/data/flume/data/log-3 position: 1179 logWriteOrderID: 1464235734901
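The checkpoint messages only show the file channel at work on the collector; to confirm the event actually landed in Kafka, read the topic back with the console consumer (0.8.x style; the ZooKeeper address kafkahost:2181 is an assumption):

kafka-console-consumer.sh --zookeeper kafkahost:2181 --topic mytopic --from-beginning

It should print Hello flume.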



Test collector failover

Kill slave3's Flume process with kill -9 PID.
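If the PID isn't handy, one way to find and kill the collector in a single line (assuming server.conf appears only in that process's command line):

kill -9 $(ps -ef | grep '[s]erver.conf' | awk '{print $2}')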


echo "Hello flume" >> /home/qun/data/flume/logs/test.txt

Log received by the collector agent on slave4 (note the Kafka producer taking over):

16/05/26 12:08:27 INFO file.EventQueueBackingStoreFile: Start checkpoint for /home/qun/data/flume/checkpoint/checkpoint, elements to sync = 2
16/05/26 12:08:27 INFO file.EventQueueBackingStoreFile: Updating checkpoint metadata: logWriteOrderID: 1464234987484, queueSize: 0, queueHead: 0
16/05/26 12:08:27 INFO file.Log: Updated checkpoint for file: /home/qun/data/flume/data/log-3 position: 393 logWriteOrderID: 1464234987484
16/05/26 12:08:27 INFO file.LogFile: Closing RandomReader /home/qun/data/flume/data/log-1
16/05/26 12:54:38 INFO client.ClientUtils$: Fetching metadata from broker id:0,host:xiaobin,port:9092 with correlation id 4 for 1 topic(s) Set(mytopic)
16/05/26 12:54:38 INFO producer.SyncProducer: Connected to xiaobin:9092 for producing
16/05/26 12:54:38 INFO producer.SyncProducer: Disconnecting from xiaobin:9092
16/05/26 12:54:38 INFO producer.SyncProducer: Disconnecting from xiaobin:9092
16/05/26 12:54:38 INFO producer.SyncProducer: Connected to xiaobin:9092 for producing
16/05/26 12:54:57 INFO file.EventQueueBackingStoreFile: Start checkpoint for /home/qun/data/flume/checkpoint/checkpoint, elements to sync = 2
16/05/26 12:54:57 INFO file.EventQueueBackingStoreFile: Updating checkpoint metadata: logWriteOrderID: 1464234987491, queueSize: 0, queueHead: 0
16/05/26 12:54:57 INFO file.Log: Updated checkpoint for file: /home/qun/data/flume/data/log-3 position: 786 logWriteOrderID: 1464234987491
16/05/26 12:54:57 INFO file.Log: Removing old file: /home/qun/data/flume/data/log-1
16/05/26 12:54:57 INFO file.Log: Removing old file: /home/qun/data/flume/data/log-1.meta
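Because k1 points at slave3 with the higher priority (10 vs. 1), the failover processor should route events back through slave3 once its collector comes back up; maxpenalty = 10000 caps the retry backoff for a failed sink at 10 seconds. Restart it with the same command as before:

flume-ng agent -n a1 -c conf -f /home/qun/apache-flume-1.6.0-bin/conf/server.conf -Dflume.root.logger=DEBUG,console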


I'll write that part up later.

