Flume-to-Kafka collection process:
# Note: in this example, Flume monitors the directory /home/hadoop/flume_kafka and delivers the collected data to Kafka.
Start the cluster
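The exact startup commands depend on the installation; assuming the ZooKeeper and Hadoop scripts are on the PATH, something like the following brings up the services this example relies on:
zkServer.sh start    # on each ZooKeeper node; Kafka and the consumer below use hdp-qm-01:2181
start-dfs.sh         # HDFS, if the rest of the Hadoop cluster is needed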
Start Kafka
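For example (the broker config path is an assumption; adjust it to your install), start the broker and, if automatic topic creation is disabled, create the topic used in this example:
kafka-server-start.sh -daemon $KAFKA_HOME/config/server.properties
kafka-topics.sh --create --zookeeper hdp-qm-01:2181 --replication-factor 1 --partitions 1 --topic mytopic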
Start the agent
flume-ng agent -c . -f /home/hadoop/flume-1.7.0/conf/myconf/flume-kafka.conf -n a1 -Dflume.root.logger=INFO,console
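The monitored directory must already exist when the agent starts, otherwise the spooling directory source fails on startup; create it first if needed:
mkdir -p /home/hadoop/flume_kafka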
Start the Kafka console consumer
kafka-console-consumer.sh --zookeeper hdp-qm-01:2181 --from-beginning --topic mytopic
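The --zookeeper option applies to older Kafka releases; on newer versions the console consumer connects to the broker directly, roughly:
kafka-console-consumer.sh --bootstrap-server hdp-qm-01:9092 --from-beginning --topic mytopic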
Produce data to Kafka
Data file:
vi /home/hadoop/flume_hbase/word.txt
12345623434
Configuration file
vi flume-kafka.conf
# Name the components on this agent
a1.sources = r1
a1.sinks = k1
a1.channels = c1
# Describe/configure the source
a1.sources.r1.type = spooldir
a1.sources.r1.spoolDir = /home/hadoop/flume_kafka
# Describe the sink
a1.sinks.k1.type = org.apache.flume.sink.kafka.KafkaSink
a1.sinks.k1.kafka.topic = mytopic
a1.sinks.k1.kafka.bootstrap.servers = hdp-qm-01:9092
a1.sinks.k1.kafka.flumeBatchSize = 20
a1.sinks.k1.kafka.producer.acks = 1
a1.sinks.k1.kafka.producer.linger.ms = 1
a1.sinks.k1.kafka.producer.compression.type = snappy
# Use a channel which buffers events in memory
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100
# Bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1
Use Flume to sink the data to Kafka: place the data file in the monitored directory, as shown below.
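For example, copying the prepared file into the monitored directory triggers the collection:
cp /home/hadoop/flume_hbase/word.txt /home/hadoop/flume_kafka/
Once ingested, the spooling directory source renames the file with a .COMPLETED suffix, and the console consumer should print the line 12345623434.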