Using flume + kafka + storm to build a real-time log analysis system

Source: Internet
Author: User
This article only involves the combination of flume and kafka. for details about the combination of kafka and storm, refer to other blogs.
1. install and use flume
Download the flume binary package from http://www.apache.org/dyn/closer.cgi/flume/1.5.2/apache-flume-1.5.2-bin.tar.gz
Decompress it: $ tar -xzvf apache-flume-1.5.2-bin.tar.gz -C /opt/flume
The configuration files live in the conf directory and the executables in the bin directory.
1) configure flume
Go to the conf directory, copy flume-conf.properties.template, and rename the copy as needed:
$ cp flume-conf.properties.template flume.conf
Modify the content of flume.conf. We use a file_roll sink to receive data from the channel, a memory channel, and an exec source. The configuration file is as follows:
 
 
agent.sources = seqGenSrc
agent.channels = memoryChannel
agent.sinks = loggerSink

# For each one of the sources, the type is defined
agent.sources.seqGenSrc.type = exec
agent.sources.seqGenSrc.command = tail -F /data/mongodata/mongo.log
#agent.sources.seqGenSrc.bind = 172.168.49.130

# The channel can be defined as follows.
agent.sources.seqGenSrc.channels = memoryChannel

# Each sink's type must be defined
agent.sinks.loggerSink.type = file_roll
agent.sinks.loggerSink.sink.directory = /data/flume

# Specify the channel the sink should use
agent.sinks.loggerSink.channel = memoryChannel

# Each channel's type is defined.
agent.channels.memoryChannel.type = memory
# Other config values specific to each type of channel (sink or source)
# can be defined as well
# In this case, it specifies the capacity of the memory channel
agent.channels.memoryChannel.capacity = 1000
agent.channels.memoryChannel.transactionCapacity = 100
2) run the flume agent
Switch to the bin directory and run the following command:
$ ./flume-ng agent --conf ../conf -f ../conf/flume.conf -n agent -Dflume.root.logger=INFO,console
You can view the generated log files in the /data/flume directory.
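As a quick sanity check (a hypothetical helper, not part of flume), the following snippet finds the most recently modified file that file_roll has written into the output directory configured above:

```java
import java.io.File;
import java.util.Arrays;
import java.util.Comparator;
import java.util.Optional;

public class NewestFlumeFile {
    // Returns the most recently modified regular file in dir, if any.
    static Optional<File> newestFile(File dir) {
        File[] files = dir.listFiles(File::isFile);
        if (files == null || files.length == 0) {
            return Optional.empty();
        }
        return Arrays.stream(files)
                .max(Comparator.comparingLong(File::lastModified));
    }

    public static void main(String[] args) {
        File dir = new File(args.length > 0 ? args[0] : "/data/flume");
        newestFile(dir).ifPresentOrElse(
                f -> System.out.println("latest roll file: " + f.getName()),
                () -> System.out.println("no files yet in " + dir));
    }
}
```

If the agent is running and events are flowing, the newest file should keep changing as file_roll rotates its output.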

2. combine flume with kafka
Because flume 1.5.2 does not ship a kafka sink, you need to develop your own.
You can refer to the kafka sink in flume 1.6, but note that some kafka APIs are not compatible between kafka versions.
Only the core code of process() is shown here.

 
 
Sink.Status status = Status.READY;

Channel ch = getChannel();
Transaction transaction = null;
Event event = null;
String eventTopic = null;
String eventKey = null;

try {
    transaction = ch.getTransaction();
    transaction.begin();
    messageList.clear();

    if (type.equals("sync")) {
        event = ch.take();

        if (event != null) {
            byte[] tempBody = event.getBody();
            String eventBody = new String(tempBody, "UTF-8");
            Map<String, String> headers = event.getHeaders();

            if ((eventTopic = headers.get(TOPIC_HDR)) == null) {
                eventTopic = topic;
            }

            eventKey = headers.get(KEY_HDR);

            if (logger.isDebugEnabled()) {
                logger.debug("{Event} " + eventTopic + " : " + eventKey + " : "
                        + eventBody);
            }

            ProducerData data = new ProducerData
                    (eventTopic, new Message(tempBody));

            long startTime = System.nanoTime();
            logger.debug(eventTopic + "++++" + eventBody);
            producer.send(data);
            long endTime = System.nanoTime();
        }
    } else {
        long processedEvents = 0;
        for (; processedEvents < batchSize; processedEvents += 1) {
            event = ch.take();

            if (event == null) {
                break;
            }

            byte[] tempBody = event.getBody();
            String eventBody = new String(tempBody, "UTF-8");
            Map<String, String> headers = event.getHeaders();

            if ((eventTopic = headers.get(TOPIC_HDR)) == null) {
                eventTopic = topic;
            }

            eventKey = headers.get(KEY_HDR);

            if (logger.isDebugEnabled()) {
                logger.debug("{Event} " + eventTopic + " : " + eventKey + " : "
                        + eventBody);
                logger.debug("event #{}", processedEvents);
            }

            // create a message and add to buffer
            ProducerData data = new ProducerData
                    (eventTopic, eventBody);
            messageList.add(data);
        }

        // publish batch and commit.
        if (processedEvents > 0) {
            long startTime = System.nanoTime();
            producer.send(messageList); // publish the buffered batch (omitted in the original snippet)
            long endTime = System.nanoTime();
        }
    }

    transaction.commit();
} catch (Exception ex) {
    String errorMsg = "Failed to publish events";
    logger.error(errorMsg, ex);
    status = Status.BACKOFF;
    if (transaction != null) {
        try {
            transaction.rollback();
        } catch (Exception e) {
            logger.error("Transaction rollback failed", e);
            throw Throwables.propagate(e);
        }
    }
    throw new EventDeliveryException(errorMsg, ex);
} finally {
    if (transaction != null) {
        transaction.close();
    }
}

return status;
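The sync/batch branching above boils down to a simple rule: take one event per transaction, or drain up to batchSize events and publish them together, stopping early when the channel runs dry. A minimal, self-contained sketch of that drain logic (plain Java, with a queue standing in for the flume channel) looks like this:

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.List;
import java.util.Queue;

public class BatchDrainSketch {
    // Drain up to batchSize events from the channel stand-in,
    // stopping early when it is empty (like ch.take() returning null).
    static List<String> drainBatch(Queue<String> channel, int batchSize) {
        List<String> batch = new ArrayList<>();
        for (int processed = 0; processed < batchSize; processed++) {
            String event = channel.poll(); // plays the role of ch.take()
            if (event == null) {
                break;
            }
            batch.add(event);
        }
        return batch;
    }

    public static void main(String[] args) {
        Queue<String> channel = new ArrayDeque<>(List.of("e1", "e2", "e3"));
        System.out.println(drainBatch(channel, 2)); // first transaction takes 2: [e1, e2]
        System.out.println(drainBatch(channel, 2)); // second drains the rest: [e3]
    }
}
```

In the real sink, each drained batch lives inside one flume transaction, so a failed publish rolls all of its events back onto the channel.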
Next, modify the flume configuration file and change the sink configuration to kafka sink, for example:

 
 
producer.sinks.r.type = org.apache.flume.sink.kafka.KafkaSink
producer.sinks.r.brokerList = bigdata-node00:9092
producer.sinks.r.requiredAcks = 1
producer.sinks.r.batchSize = 100
#producer.sinks.r.kafka.producer.type=async
#producer.sinks.r.kafka.customer.encoding=UTF-8
producer.sinks.r.topic = testFlume1
The type parameter is the fully qualified class name of the kafka sink.
The remaining parameters are passed through to kafka; the most important are brokerList and topic.
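At configure time the sink has to split brokerList into host:port pairs before it can build a producer. A hypothetical validation helper (an illustration of that step, not flume or kafka API) might look like:

```java
import java.util.ArrayList;
import java.util.List;

public class BrokerListCheck {
    // Splits a comma-separated broker list and validates each host:port entry.
    static List<String> parseBrokers(String brokerList) {
        List<String> brokers = new ArrayList<>();
        for (String entry : brokerList.split(",")) {
            String[] parts = entry.trim().split(":");
            if (parts.length != 2 || parts[0].isEmpty()) {
                throw new IllegalArgumentException("bad broker entry: " + entry);
            }
            int port = Integer.parseInt(parts[1]); // throws if not numeric
            if (port < 1 || port > 65535) {
                throw new IllegalArgumentException("bad port in: " + entry);
            }
            brokers.add(parts[0] + ":" + port);
        }
        return brokers;
    }

    public static void main(String[] args) {
        System.out.println(parseBrokers("bigdata-node00:9092"));
    }
}
```

Failing fast on a malformed brokerList at startup is much easier to debug than a producer that silently cannot connect.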

Now restart flume, and the logs should appear under the corresponding kafka topic.

