Flume configuration: create a new configuration file under Flume's conf directory to transfer the collected data to the Kafka cluster.

[[email protected] flume]# vim conf/file-monitor.conf

# Declare the agent's components
a1.sources = r1
a1.sinks = k1
a1.channels = c1

# Define the data source
a1.sources.r1.type = exec
a1.sources.r1.command = tail -f /data/xx.log
a1.sources.r1.channels = c1

# Interceptor: regex filter
a1.sources.r1.interceptors = i1
a1.sources.r1.interceptors.i1.type = regex_filter
#a1.sources.r1.interceptors.i1.regex = (Parsing events)(.*)(END)
a1.sources.r1.interceptors.i1.regex = (AAAA)(.*)   # only matching events are passed on to the channel

# Define where events are staged; this can be memory, disk, a database, etc.
a1.channels.c1.type = file
a1.channels.c1.checkpointDir = /data/flume/chk
a1.channels.c1.dataDirs = /data/flume/data

# Define the data flow into Kafka
#a1.sinks.k1.type = logger
a1.sinks.k1.type = org.apache.flume.sink.kafka.KafkaSink
a1.sinks.k1.brokerList = 192.168.41.47:9092,192.168.41.127:9092,192.168.41.86:9092
a1.sinks.k1.topic = mytopic
#a1.sinks.k1.requiredAcks = 1
#a1.sinks.k1.batchSize = 20
a1.sinks.k1.serializer.class = kafka.serializer.StringEncoder
a1.sinks.k1.channel = c1

Start the agent:

[[email protected] flume]# nohup bin/flume-ng agent -n a1 -c conf/ -f conf/file-monitor.conf -Dflume.root.logger=INFO,console > nohup.out 2>&1 &
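To make the regex_filter behavior above concrete, here is a minimal sketch in Python of how the interceptor decides which events to forward. It assumes (as Flume's regex_filter does with Java's Matcher.find()) an unanchored search against each event body; the event strings below are made-up examples, not from the source.

```python
import re

# Same pattern as the config: a1.sources.r1.interceptors.i1.regex = (AAAA)(.*)
# Assumption: Flume's regex_filter does an unanchored find(), mirrored
# here with re.search rather than re.fullmatch.
pattern = re.compile(r"(AAAA)(.*)")

def passes_filter(event_body: str) -> bool:
    """Return True if the event would be forwarded to the channel."""
    return pattern.search(event_body) is not None

# Hypothetical log lines tailed from /data/xx.log
events = ["AAAA something happened", "BBBB ignored line", "prefix AAAA tail"]
passed = [e for e in events if passes_filter(e)]
```

With excludeEvents left at its default (false), only events whose body matches the regex continue downstream; the non-matching "BBBB" line is dropped before it ever reaches the file channel or the Kafka sink.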