Use Flume + Kafka + Storm to build a real-time log analysis system
This article covers only the combination of Flume and Kafka; for the combination of Kafka and Storm, refer to other blogs.
1. Install and use Flume
Download the Flume installation package: http://www.apache.org/dyn/closer.cgi/flume/1.5.2/apache-flume-1.5.2-bin.tar.gz
Extract it:
$ tar -xzvf apache-flume-1.5.2-bin.tar.gz -C /opt/flume
After extraction, the Flume configuration files are in the conf directory and the executables are in the bin directory.
1) Configure Flume
Go to the conf directory, copy flume-conf.properties.template, and name the copy as needed:
$ cp flume-conf.properties.template flume.conf
Edit flume.conf. Here the source is an exec source (tailing a log file), the channel is a memory channel, and the sink is a file_roll sink that writes the events taken from the channel to local files. The configuration is as follows:
- agent.sources = seqGenSrc
- agent.channels = memoryChannel
- agent.sinks = loggerSink
- # For each one of the sources, the type is defined
- agent.sources.seqGenSrc.type = exec
- agent.sources.seqGenSrc.command = tail -F /data/mongodata/mongo.log
- #agent.sources.seqGenSrc.bind = 172.168.49.130
- # The channel can be defined as follows.
- agent.sources.seqGenSrc.channels = memoryChannel
- # Each sink's type must be defined
- agent.sinks.loggerSink.type = file_roll
- agent.sinks.loggerSink.sink.directory = /data/flume
- #Specify the channel the sink should use
- agent.sinks.loggerSink.channel = memoryChannel
- # Each channel's type is defined.
- agent.channels.memoryChannel.type = memory
- # Other config values specific to each type of channel(sink or source)
- # can be defined as well
- # In this case, it specifies the capacity of the memory channel
- agent.channels.memoryChannel.capacity = 1000
- agent.channels.memoryChannel.transactionCapacity = 100
2) Run the Flume agent
Switch to the bin directory and run the following command:
$ ./flume-ng agent --conf ../conf -f ../conf/flume.conf -n agent -Dflume.root.logger=INFO,console
You can then see the generated log files in the /data/flume directory.
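To confirm that events are actually flowing, list the sink's output directory, for example:
$ ls -l /data/flume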
2. Combine with Kafka
Because Flume 1.5.2 does not ship with a Kafka sink, you need to develop your own.
You can refer to the Kafka sink introduced in Flume 1.6, but note that some of its Kafka API calls are not compatible with the Kafka version used here.
Only the core code of the process() method is shown here:
- Sink.Status status = Status.READY;
- Channel ch = getChannel();
- Transaction transaction = null;
- Event event = null;
- String eventTopic = null;
- String eventKey = null;
- try {
- transaction = ch.getTransaction();
- transaction.begin();
- messageList.clear();
- if (type.equals("sync")) {
- event = ch.take();
- if (event != null) {
- byte[] tempBody = event.getBody();
- String eventBody = new String(tempBody,"UTF-8");
- Map<String, String> headers = event.getHeaders();
- if ((eventTopic = headers.get(TOPIC_HDR)) == null) {
- eventTopic = topic;
- }
- eventKey = headers.get(KEY_HDR);
- if (logger.isDebugEnabled()) {
- logger.debug("{Event} " + eventTopic + " : " + eventKey + " : "
- + eventBody);
- }
-
- ProducerData data = new ProducerData(eventTopic, new Message(tempBody));
-
- long startTime = System.nanoTime();
- logger.debug(eventTopic+"++++"+eventBody);
- producer.send(data);
- long endTime = System.nanoTime();
- }
- } else {
- long processedEvents = 0;
- for (; processedEvents < batchSize; processedEvents += 1) {
- event = ch.take();
- if (event == null) {
- break;
- }
- byte[] tempBody = event.getBody();
- String eventBody = new String(tempBody,"UTF-8");
- Map<String, String> headers = event.getHeaders();
- if ((eventTopic = headers.get(TOPIC_HDR)) == null) {
- eventTopic = topic;
- }
- eventKey = headers.get(KEY_HDR);
- if (logger.isDebugEnabled()) {
- logger.debug("{Event} " + eventTopic + " : " + eventKey + " : "
- + eventBody);
- logger.debug("event #{}", processedEvents);
- }
- // create a message and add to buffer
- ProducerData data = new ProducerData(eventTopic, eventBody);
- messageList.add(data);
- }
- // publish the batch and commit
- if (processedEvents > 0) {
- long startTime = System.nanoTime();
- producer.send(messageList);
- long endTime = System.nanoTime();
- }
- }
- transaction.commit();
- } catch (Exception ex) {
- String errorMsg = "Failed to publish events";
- logger.error("Failed to publish events", ex);
- status = Status.BACKOFF;
- if (transaction != null) {
- try {
- transaction.rollback();
- } catch (Exception e) {
- logger.error("Transaction rollback failed", e);
- throw Throwables.propagate(e);
- }
- }
- throw new EventDeliveryException(errorMsg, ex);
- } finally {
- if (transaction != null) {
- transaction.close();
- }
- }
- return status;
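For orientation, the process() method above lives inside a sink class that extends Flume's AbstractSink and implements Configurable. A minimal sketch of that wrapper follows; the field names, the raw (non-generic) producer types, the producerType property name, and the way the producer properties are filled in are assumptions for illustration, while the topic, brokerList, and batchSize properties mirror the sink configuration shown later in this article.
- import java.util.ArrayList;
- import java.util.List;
- import java.util.Properties;
- import org.apache.flume.Context;
- import org.apache.flume.EventDeliveryException;
- import org.apache.flume.conf.Configurable;
- import org.apache.flume.sink.AbstractSink;
- import org.slf4j.Logger;
- import org.slf4j.LoggerFactory;
- import kafka.javaapi.producer.Producer;
- import kafka.javaapi.producer.ProducerData;
- import kafka.producer.ProducerConfig;
- // (the full class also needs the imports used by process(): Channel, Event, Transaction, Message, etc.)
-
- public class KafkaSink extends AbstractSink implements Configurable {
- private static final Logger logger = LoggerFactory.getLogger(KafkaSink.class);
- // event header names that may override the topic/key per event
- private static final String TOPIC_HDR = "topic";
- private static final String KEY_HDR = "key";
- private Producer producer;
- private final List<ProducerData> messageList = new ArrayList<ProducerData>();
- private String topic;
- private String brokerList;
- private String type;
- private int batchSize;
-
- @Override
- public void configure(Context context) {
- // property names mirror the flume sink configuration shown below
- topic = context.getString("topic", "testFlume1");
- brokerList = context.getString("brokerList", "localhost:9092");
- batchSize = context.getInteger("batchSize", 100);
- type = context.getString("producerType", "sync"); // assumed property name for sync/batch mode
- }
-
- @Override
- public synchronized void start() {
- Properties props = new Properties();
- // the exact producer property names depend on the kafka version in use;
- // set the broker list and serializer settings accordingly
- props.put("broker.list", brokerList);
- props.put("serializer.class", "kafka.serializer.DefaultEncoder");
- producer = new Producer(new ProducerConfig(props));
- super.start();
- }
-
- @Override
- public synchronized void stop() {
- producer.close();
- super.stop();
- }
-
- @Override
- public Status process() throws EventDeliveryException {
- // the process() body shown above goes here
- return Status.READY;
- }
- }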
Next, modify the Flume configuration file and replace the file_roll sink with the new Kafka sink, for example:
- producer.sinks.r.type = org.apache.flume.sink.kafka.KafkaSink
- producer.sinks.r.brokerList = bigdata-node00:9092
- producer.sinks.r.requiredAcks = 1
- producer.sinks.r.batchSize = 100
- #producer.sinks.r.kafka.producer.type=async
- #producer.sinks.r.kafka.customer.encoding=UTF-8
- producer.sinks.r.topic = testFlume1
The type parameter is the fully qualified class name of the custom KafkaSink. The remaining entries are Kafka-related parameters; the most important ones are brokerList and topic.
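Note that this sink belongs to an agent named producer and is itself named r, so the rest of that agent's definition must be wired with the same names. A sketch of the full agent, reusing the exec source and memory channel from the earlier configuration (the source and channel names s and c are assumptions):
- producer.sources = s
- producer.channels = c
- producer.sinks = r
- # exec source tailing the same log file as before
- producer.sources.s.type = exec
- producer.sources.s.command = tail -F /data/mongodata/mongo.log
- producer.sources.s.channels = c
- # memory channel
- producer.channels.c.type = memory
- producer.channels.c.capacity = 1000
- producer.channels.c.transactionCapacity = 100
- # bind the kafka sink to the channel
- producer.sinks.r.channel = c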
Now restart Flume and you should see the collected log entries appear under the corresponding Kafka topic.
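One way to check is Kafka's console consumer (assuming ZooKeeper runs on localhost:2181; adjust the address and script path for your installation):
$ bin/kafka-console-consumer.sh --zookeeper localhost:2181 --topic testFlume1 --from-beginning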