Use Flume + Kafka + Storm to build a real-time log analysis system
This article only covers the combination of Flume and Kafka. For details on combining Kafka with Storm, see the related article "Kafka-Storm integrated deployment".
1. Install and use Flume
Download the Flume binary package: http://www.apache.org/dyn/closer.cgi/flume/1.5.2/apache-flume-1.5.2-bin.tar.gz
Decompress it:
$ tar -xzvf apache-flume-1.5.2-bin.tar.gz -C /opt/flume
The Flume configuration files are in the conf directory and the executables are in the bin directory.
1) Configure Flume
Go to the conf directory, copy flume-conf.properties.template, and rename the copy as needed:
$ cp flume-conf.properties.template flume.conf
Edit flume.conf. We use a file_roll sink to write out the data from the channel, a memory channel, and an exec source. The configuration file is as follows:
agent.sources = seqGenSrc
agent.channels = memoryChannel
agent.sinks = loggerSink
# For each one of the sources, the type is defined
agent.sources.seqGenSrc.type = exec
agent.sources.seqGenSrc.command = tail -F /data/parsedata/mongo.log
# agent.sources.seqGenSrc.bind = 172.1649.130
# The channel can be defined as follows.
agent.sources.seqGenSrc.channels = memoryChannel
# Each sink's type must be defined
agent.sinks.loggerSink.type = file_roll
agent.sinks.loggerSink.sink.directory = /data/flume
# Specify the channel the sink should use
agent.sinks.loggerSink.channel = memoryChannel
# Each channel's type is defined.
agent.channels.memoryChannel.type = memory
# Other config values specific to each type of channel (sink or source)
# can be defined as well
# In this case, it specifies the capacity of the memory channel
agent.channels.memoryChannel.capacity = 1000
agent.channels.memoryChannel.transactionCapacity = 100
2) Run the Flume agent
Switch to the bin directory and run the following command:
$ ./flume-ng agent --conf ../conf -f ../conf/flume.conf -n agent -Dflume.root.logger=INFO,console
You can then view the generated log files in the /data/flume directory.
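To sanity-check this setup, append a test line to the log file tailed by the exec source and confirm that the file_roll sink picks it up (the paths follow the configuration above; the test text is arbitrary):
$ echo "flume file_roll test" >> /data/parsedata/mongo.log
$ ls /data/flume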
2. Integrate with Kafka
Because Flume 1.5.2 does not ship with a Kafka sink, you need to develop your own.
You can refer to the Kafka sink in Flume 1.6, but note that some of the Kafka APIs it uses are not compatible with older Kafka versions.
Only the core code of the process() method is shown here; a rough sketch of the surrounding sink class follows the snippet.
Sink.Status status = Status.READY;
Channel ch = getChannel();
Transaction transaction = null;
Event event = null;
String eventTopic = null;
String eventKey = null;
try {
    transaction = ch.getTransaction();
    transaction.begin();
    messageList.clear();
    if (type.equals("sync")) {
        // Synchronous mode: take one event from the channel and send it immediately
        event = ch.take();
        if (event != null) {
            byte[] tempBody = event.getBody();
            String eventBody = new String(tempBody, "UTF-8");
            Map<String, String> headers = event.getHeaders();
            // Fall back to the default topic if the event carries no "topic" header
            if ((eventTopic = headers.get(TOPIC_HDR)) == null) {
                eventTopic = topic;
            }
            eventKey = headers.get(KEY_HDR);
            if (logger.isDebugEnabled()) {
                logger.debug("{Event}" + eventTopic + ":" + eventKey + ":" + eventBody);
            }
            ProducerData<String, Message> data =
                    new ProducerData<String, Message>(eventTopic, new Message(tempBody));
            long startTime = System.nanoTime();
            logger.debug(eventTopic + "++" + eventBody);
            producer.send(data);
            long endTime = System.nanoTime();
        }
    } else {
        // Asynchronous (batch) mode: buffer up to batchSize events, then publish them together
        long processedEvents = 0;
        for (; processedEvents < batchSize; processedEvents += 1) {
            event = ch.take();
            if (event == null) {
                break;
            }
            byte[] tempBody = event.getBody();
            String eventBody = new String(tempBody, "UTF-8");
            Map<String, String> headers = event.getHeaders();
            if ((eventTopic = headers.get(TOPIC_HDR)) == null) {
                eventTopic = topic;
            }
            eventKey = headers.get(KEY_HDR);
            if (logger.isDebugEnabled()) {
                logger.debug("{Event}" + eventTopic + ":" + eventKey + ":" + eventBody);
                logger.debug("event #{}", processedEvents);
            }
            // Create a message and add it to the buffer
            ProducerData<String, String> data =
                    new ProducerData<String, String>(eventTopic, eventBody);
            messageList.add(data);
        }
        // Publish the batch and commit.
        if (processedEvents > 0) {
            long startTime = System.nanoTime();
            producer.send(messageList);
            long endTime = System.nanoTime();
        }
    }
    transaction.commit();
} catch (Exception ex) {
    String errorMsg = "Failed to publish events";
    logger.error("Failed to publish events", ex);
    status = Status.BACKOFF;
    if (transaction != null) {
        try {
            transaction.rollback();
        } catch (Exception e) {
            logger.error("Transaction rollback failed", e);
            throw Throwables.propagate(e);
        }
    }
    throw new EventDeliveryException(errorMsg, ex);
} finally {
    if (transaction != null) {
        transaction.close();
    }
}
return status;
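The process() body above relies on fields such as producer, topic, type, batchSize and messageList that are initialized elsewhere in the sink. The class below is only a rough sketch of that surrounding code, assuming the old Kafka 0.7-style producer API (kafka.javaapi.producer.Producer / ProducerData) that the snippet uses; the producer configuration keys and the value type (String versus the kafka.message.Message that the sync branch wraps the body in) depend on the Kafka version you build against:

import java.util.ArrayList;
import java.util.List;
import java.util.Properties;

import org.apache.flume.Context;
import org.apache.flume.EventDeliveryException;
import org.apache.flume.conf.Configurable;
import org.apache.flume.sink.AbstractSink;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

import kafka.javaapi.producer.Producer;
import kafka.javaapi.producer.ProducerData;
import kafka.producer.ProducerConfig;

public class KafkaSink extends AbstractSink implements Configurable {

    private static final Logger logger = LoggerFactory.getLogger(KafkaSink.class);

    // Header names checked by process() for a per-event topic/key
    public static final String TOPIC_HDR = "topic";
    public static final String KEY_HDR = "key";

    private Producer<String, String> producer;
    private String brokerList;   // e.g. bigdata-node00:9092
    private String topic;        // default topic when an event has no "topic" header
    private String type;         // "sync" sends per event, anything else batches
    private int batchSize;
    private final List<ProducerData<String, String>> messageList =
            new ArrayList<ProducerData<String, String>>();

    @Override
    public void configure(Context context) {
        // Read the sink settings from flume.conf; the keys match the example configuration below
        brokerList = context.getString("brokerList");
        topic = context.getString("topic");
        type = context.getString("producer.type", "async");
        batchSize = context.getInteger("batchSize", 100);
    }

    @Override
    public synchronized void start() {
        // Create the producer once when the sink starts.
        // These property names are an assumption and must match your Kafka client version.
        Properties props = new Properties();
        props.put("broker.list", brokerList);
        props.put("serializer.class", "kafka.serializer.StringEncoder");
        producer = new Producer<String, String>(new ProducerConfig(props));
        super.start();
    }

    @Override
    public synchronized void stop() {
        producer.close();
        super.stop();
    }

    @Override
    public Status process() throws EventDeliveryException {
        // The body of this method is the snippet shown above.
        return Status.READY;
    }
}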
Next, modify the Flume configuration file and change the sink configuration to the new Kafka sink, for example:
producer.sinks.r.type = org.apache.flume.sink.kafka.KafkaSink
producer.sinks.r.brokerList = bigdata-node00:9092
producer.sinks.r.requiredAcks = 1
producer.sinks.r.batchSize = 100
# producer.sinks.r.kafka.producer.type = async
# producer.sinks.r.kafka.customer.encoding = UTF-8
producer.sinks.r.topic = testFlume1
The type parameter is the fully qualified class name of the Kafka sink.
The remaining parameters are Kafka producer settings; the most important are brokerList and topic. Note that this example uses the agent name producer (the prefix of each property), so the name passed to flume-ng with -n must match it.
Now restart Flume, and you should see the corresponding logs appear under the Kafka topic.
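A quick way to check is the console consumer that ships with Kafka; the ZooKeeper address below is only a placeholder, and the script name and flags vary with the Kafka version (the --zookeeper form matches the older 0.8-era tooling):
$ bin/kafka-console-consumer.sh --zookeeper bigdata-node00:2181 --topic testFlume1 --from-beginning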