Flume-based Log Collection System (I): Architecture and Design


Questions Guide:

1. Compared with Scribe, what are the advantages of Flume-NG?
2. What issues should be considered in the architecture design?
3. What happens when an agent dies, and how is it handled?
4. Does a collector crash affect the overall service?
5. What measures does Flume-NG take to guarantee reliability?



Meituan's log collection system is responsible for collecting all business logs across the company, providing offline data to the Hadoop platform and real-time data streams to the Storm platform. Meituan's log collection system is designed and built on Flume. "Flume-based Log Collection System" is presented to readers in two parts, covering the architecture design and practical experience of Meituan's log collection system. The first part, Architecture and Design, focuses on the overall architecture of the log collection system and the reasons behind the design choices. The second part, Improvement and Optimization, focuses on the functional modifications and optimizations made to Flume for problems encountered during actual deployment and use.

1 Log Collection System Introduction

Log collection is the cornerstone of big data. Many companies' business platforms generate large amounts of log data every day. Collecting business log data and supplying it to offline and online analysis systems is exactly what a log collection system does. High availability, high reliability, and scalability are the basic features of a log collection system. Common open source log collection systems currently include Flume, Scribe, and others. Flume is a highly available, highly reliable, distributed system for collecting, aggregating, and transporting massive logs, originally from Cloudera and now an Apache project. Scribe is Facebook's open source log collection system, which provides a scalable, highly fault-tolerant, simple solution for distributed log collection and unified processing.

2 Comparison of Common Open Source Log Collection Systems

Below, the common open source log collection systems Flume and Scribe are compared from several angles. For Flume, Apache Flume-NG is used as the reference. At the same time, we divide a typical log collection system into three layers (agent layer, collector layer, and store layer) for the comparison.


Comparison item | Flume-NG | Scribe
Implementation language | Java | C++
Fault tolerance | Fault tolerant between agent and collector and between collector and store; provides three levels of reliability guarantees | Fault tolerant between agent and collector and between collector and store
Load balancing | Load-balance and failover modes between agent and collector and between collector and store | None
Scalability | Good | Good
Agent (source) richness | Rich set of sources, including Avro/Thrift sockets, text, tail, etc. | Mainly a Thrift port
Store (sink) richness | Can write directly to HDFS, text, console, TCP; supports text and sequence file formats with compression when writing to HDFS | Provides buffer, network, file (HDFS, text), etc.
Code structure | Well-designed framework, clear modules, easy to extend | Simple code

3 Meituan Log Collection System Architecture

Meituan's log collection system is responsible for collecting all business logs across the company, providing offline data to the Hadoop platform and real-time data streams to the Storm platform. Meituan's log collection system is designed and built on Flume. Currently, it collects and processes on the order of terabytes of log data per day. The figure below shows the overall architecture of Meituan's log collection system.
A. The entire system is divided into three tiers: the agent layer, the collector layer, and the store layer. In the agent layer, one agent process is deployed on each machine, responsible for collecting the logs of that single machine. The collector layer is deployed on central hub servers and is responsible for receiving the logs sent by the agent layer and writing them to the appropriate store layer according to routing rules. The store layer is responsible for providing permanent or temporary log storage, or for directing the log stream to other servers.
B. From agent to collector, a load-balance policy is used to send all logs evenly across the collectors, achieving load balancing and also handling the failure of individual collectors (a sketch of this behaviour follows after the lists below).
C. The collector layer has three main targets: SinkHdfs, SinkKafka, and SinkBypass, which respectively provide offline data to HDFS and real-time log streams to Kafka and to bypass. SinkHdfs is further divided into three sinks, sinkHdfs_b, sinkHdfs_m, and sinkHdfs_s, according to log size, in order to improve the performance of writing to HDFS (see the description below).
D. In the store layer, HDFS permanently stores all logs, Kafka stores the most recent 7 days of logs and provides real-time log streams to the Storm system, and bypass provides real-time log streams to other servers and applications.
The figure below is the module decomposition diagram of Meituan's log collection system; it details the relationships among source, channel, and sink within the agent, collector, and bypass modules.
a. Module naming rules: all sources start with src, all channels start with ch, and all sinks start with sink.
b. The channels uniformly use the DualChannel developed by Meituan; the reasons are detailed later. For logs that are filtered out, NullChannel is used; the reasons are also detailed later.
c. All internal communication between modules uses the Avro interface.
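To make the load-balancing behaviour in item B concrete, here is a minimal Python sketch of load balancing with failover across collectors. It is illustrative only, not Flume code: the collector list, the send_to stand-in, and the simulated failures are all assumptions made for the example.

```python
import random

# Hypothetical list of collector addresses; in a real deployment these would
# be the hub servers running the collector-tier Flume agents.
COLLECTORS = ["collector-01:4545", "collector-02:4545", "collector-03:4545"]

def send_to(collector, event):
    """Stand-in for an Avro/Thrift RPC to a collector; may raise on failure."""
    if random.random() < 0.2:          # simulate an occasional dead collector
        raise ConnectionError(f"{collector} unreachable")
    print(f"sent {event!r} to {collector}")

def send_with_load_balance(event, collectors=COLLECTORS):
    """Load balancing with failover: try the collectors in a shuffled order
    and fall back to the next one whenever a send fails."""
    candidates = collectors[:]
    random.shuffle(candidates)         # spread load evenly across collectors
    for collector in candidates:
        try:
            send_to(collector, event)
            return True
        except ConnectionError:
            continue                   # failover: try the next collector
    return False                       # every collector failed; caller retries later

if __name__ == "__main__":
    for i in range(5):
        send_with_load_balance(f"log line {i}")
```

Because every collector offers the same service, any healthy collector can take over for a failed one, which is the property the availability discussion in section 4.1.2 relies on.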

4 Architectural Design Considerations

The architecture above is analyzed in detail below from the aspects of availability, reliability, scalability, and compatibility.

4.1 Availability

For a log collection system, availability refers to the total time the system runs without failure during a fixed period. To improve availability, we need to eliminate single points of failure and increase the system's redundancy. Let's look at the availability considerations in Meituan's log collection system.

4.1.1 Agent dies

There are two cases in which an agent dies: the machine crashes, or the agent process dies. If the machine crashes, the process that generates the logs also dies, so no new logs are produced and no service is lost beyond that machine. If the agent process dies, it does reduce the availability of the system. For this case, we have the following three measures to improve availability. First, all agents are started under supervise, so if the process dies it is restarted immediately and continues to provide service. Second, all agents are monitored for liveness, and an alarm is raised as soon as an agent is found dead. Finally, for particularly important logs, it is recommended that the application write the log directly to disk and that the agent use a spooldir source to pick up the newest log files.
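The supervise measure can be pictured with a small sketch: a loop that restarts the agent whenever it exits. This is an illustrative Python sketch, not the actual tooling used; the flume-ng command line and the back-off interval are assumptions.

```python
import subprocess
import time

# Hypothetical command that starts a Flume agent; adjust the agent name and
# configuration path to match the actual deployment.
AGENT_CMD = ["flume-ng", "agent", "-n", "agent1", "-f", "/etc/flume/agent.conf"]

def supervise(cmd, backoff_seconds=5):
    """Keep the agent process alive: if it exits for any reason, wait briefly
    and start it again, so a crashed agent resumes service quickly."""
    while True:
        proc = subprocess.Popen(cmd)
        code = proc.wait()                     # block until the agent dies
        print(f"agent exited with code {code}; restarting in {backoff_seconds}s")
        time.sleep(backoff_seconds)            # avoid a tight restart loop

if __name__ == "__main__":
    supervise(AGENT_CMD)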

4.1.2 Collector dies

Because the hub servers provide an equivalent, undifferentiated service, and the agents use a load-balance and retry mechanism when accessing collectors, when a collector can no longer provide service the agents' retry policy sends the data to other available collectors. The overall service is therefore unaffected.

4.1.3 HDFS normal shutdown

We provide a switch option in the collector's HdfsSink that makes the collector stop writing to HDFS and cache all events in the FileChannel, which keeps the service running through a planned HDFS shutdown.
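The switch option is a Meituan-specific modification whose implementation is not shown in the article. The following Python sketch only illustrates the idea under assumed names: when the switch is off, the sink stops draining its channel, so events simply accumulate there (in practice, in the FileChannel).

```python
import queue

class SwitchableSink:
    """Illustrative sink with an on/off switch. When writing is disabled,
    process() takes nothing from the channel, so events pile up there."""

    def __init__(self, channel):
        self.channel = channel          # any queue-like object acting as the channel
        self.writing_enabled = True     # the "switch" controlled by an operator

    def process(self):
        if not self.writing_enabled:
            return "BACKOFF"            # leave events cached in the channel
        try:
            event = self.channel.get_nowait()
        except queue.Empty:
            return "BACKOFF"
        print(f"wrote {event!r} to HDFS")   # stand-in for the real HDFS write
        return "READY"

if __name__ == "__main__":
    ch = queue.Queue()
    for i in range(3):
        ch.put(f"event {i}")
    sink = SwitchableSink(ch)
    sink.process()                      # writes event 0
    sink.writing_enabled = False        # e.g. before a planned HDFS shutdown
    sink.process()                      # events 1 and 2 stay cached in the channel
```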

4.1.4 HDFS abnormal shutdown or inaccessibility

If HDFS shuts down abnormally or becomes inaccessible, the collector cannot write to HDFS. Because we use DualChannel, the collector can cache the received events in the FileChannel, persisting them on disk, and continue to provide service. When HDFS comes back, the events cached in the FileChannel are sent on to HDFS. This mechanism is similar to Scribe's and provides good fault tolerance.

4.1.5 Collector slows down or the agent-collector network slows down

If a collector's processing slows down (for example, the machine load is too high) or the network between agent and collector slows down, the agent will send to the collector more slowly. For this scenario, we likewise use DualChannel on the agent side: the agent caches the received events in the FileChannel, persists them on disk, and continues to provide service. When the collector recovers, the events cached in the FileChannel are sent on to the collector.

4.1.6 HDFS slows down

When there are many tasks on Hadoop and a lot of read and write activity, reading from and writing to HDFS often becomes very slow. This is a common situation because of weekly peak usage periods. We also use DualChannel to handle HDFS slowing down. When HDFS writes are fast, all events pass only through the MemoryChannel, reducing disk IO for higher performance; when HDFS writes are slow, all events pass through the FileChannel, which provides a large data cache.

4.2 Reliability

For a log collection system, reliability means that Flume guarantees reliable delivery of events while data flows through the system. In Flume, all events are stored in an agent's channel and then sent to the next agent in the data flow or to the final storage service. When is an event deleted from an agent's channel? Only when it has been saved into the channel of the next agent or into the final storage service. This is the most basic single-hop, point-to-point reliability guarantee that Flume provides within a data flow. How does Flume achieve this basic message-passing semantics? First, through transactions between hops. Flume uses transactions to guarantee reliable delivery of events: source and sink operations are wrapped in transactions provided by the channel that stores the events, which ensures that events are transferred reliably point to point. In a multi-hop data flow, the sink of the previous hop and the source of the next hop both run inside transactions, ensuring that data moves reliably from one channel to the next.
Second, through persistence of the channels in the data flow. MemoryChannel in Flume can lose data (when the agent dies), while FileChannel is persistent and provides a logging mechanism similar to MySQL's to ensure that data is not lost.
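The single-hop transaction semantics described above can be sketched roughly as follows. This is a simplified Python illustration of the idea, not Flume's actual Transaction API: an event taken from the upstream channel is only removed for good after the hand-off to the next hop succeeds.

```python
from collections import deque

class Channel:
    """Toy channel with take/commit/rollback semantics: an event taken inside a
    transaction is only removed for good when the transaction commits."""

    def __init__(self):
        self.events = deque()
        self.in_flight = None

    def put(self, event):
        self.events.append(event)

    def take(self):
        self.in_flight = self.events.popleft() if self.events else None
        return self.in_flight

    def commit(self):
        self.in_flight = None                       # event now owned by the next hop

    def rollback(self):
        if self.in_flight is not None:
            self.events.appendleft(self.in_flight)  # put the event back for retry
            self.in_flight = None

def forward_one(upstream, downstream_put):
    """Move one event to the next hop; delete it upstream only on success."""
    event = upstream.take()
    if event is None:
        return
    try:
        downstream_put(event)     # e.g. an RPC into the next agent's channel
        upstream.commit()
    except Exception:
        upstream.rollback()       # keep the event so it can be retried

if __name__ == "__main__":
    ch = Channel()
    ch.put("log line")
    forward_one(ch, lambda e: print(f"delivered {e!r} downstream"))
```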

4.3 Scalability

For a log collection system, scalability means the ability of the system to scale linearly: when log volume increases, the system can grow simply by adding machines. For a Flume-based log collection system, every layer of the design needs to be able to scale its service linearly. The scalability of each layer is described below.

4.3.1 Agent Layer

For the agent layer, each machine deploys one agent, so the layer scales horizontally without limit. On the one hand, an agent's log collection capacity is bounded by the machine's own performance, and under normal circumstances one agent can provide sufficient service for a single machine. On the other hand, as the number of machines grows, the layer may become limited by the service provided by the back-end collectors; but since agent-to-collector uses a load-balance mechanism, the collector layer can be scaled linearly to increase capacity.

4.3.2 Collector Layer

For the collector layer, agent-to-collector uses a load-balance mechanism and the collectors provide an undifferentiated service, so the layer can be scaled linearly. Its performance is mainly limited by the capacity of the store layer.

4.3.3 Store Layer

For the store layer, HDFS and Kafka are distributed systems and can be scaled linearly. Bypass is a temporary application that corresponds only to a certain class of logs, so its performance is not a bottleneck.

4.4 Channel Selection

Flume 1.4.0 officially provides MemoryChannel and FileChannel to choose from. Their advantages and disadvantages are as follows:
    • MemoryChannel: all events are kept in memory. The advantage is high throughput; the disadvantages are limited capacity and loss of the in-memory data when the agent dies.
    • FileChannel: all events are saved in files. The advantages are large capacity and recoverability of the data after a crash; the disadvantage is slower speed.
The strengths and weaknesses of the two channels are exactly complementary, and each has its suitable scenarios. For most applications, however, we want the channel to provide both high throughput and a large cache. Based on this, we developed DualChannel:
    • DualChannel: developed on top of MemoryChannel and FileChannel. When the number of events backed up in the channel is below a threshold, all events are kept in MemoryChannel and the sink reads from MemoryChannel; when the number of backed-up events exceeds the threshold, all events are automatically written to FileChannel and the sink reads from FileChannel. So when the system operates normally we benefit from MemoryChannel's high throughput, and when the system has a problem we benefit from FileChannel's large cache.
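DualChannel is Meituan's own development and its code is not included in the article. The Python sketch below only illustrates the switching rule just described, using an in-memory deque for the memory path, a plain text file for the file path, and a made-up threshold.

```python
import os
from collections import deque

class DualChannelSketch:
    """Illustrative dual channel: keep events in memory while the backlog is
    small, spill to a file once the backlog exceeds a threshold."""

    def __init__(self, spill_path="spill.log", threshold=1000):
        self.memory = deque()
        self.spill_path = spill_path
        self.threshold = threshold
        self.spilled = 0                      # events currently sitting in the file

    def put(self, event):
        if self.spilled == 0 and len(self.memory) < self.threshold:
            self.memory.append(event)         # fast path: MemoryChannel-like
        else:
            with open(self.spill_path, "a") as f:   # slow path: FileChannel-like
                f.write(event + "\n")
            self.spilled += 1

    def take(self):
        if self.memory:
            return self.memory.popleft()
        if self.spilled:
            # Drain the spill file once memory is empty (simplified: read it all).
            with open(self.spill_path) as f:
                events = f.read().splitlines()
            os.remove(self.spill_path)
            self.spilled = 0
            self.memory.extend(events)
            return self.memory.popleft() if self.memory else None
        return None

if __name__ == "__main__":
    ch = DualChannelSketch(threshold=2)
    for i in range(5):
        ch.put(f"event {i}")                  # events 2-4 spill to disk
    while (e := ch.take()) is not None:
        print("took", e)
```

The real DualChannel additionally has to preserve FileChannel's durability and transactional guarantees; the sketch only shows the threshold-based switching between the two paths.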

4.5 Compatibility with Scribe

At design time, we required that each type of log have a corresponding category, and that the Flume agent provide both an AvroSource and a ScribeSource. This keeps the system compatible with the previous Scribe deployment and reduces the cost of changes on the business side.

4.6 Permissions Control

In the current log collection system, we use only the simplest permission control: only registered categories are allowed into the storage system, so the current permission control amounts to category filtering. If the permission control is placed on the agent side, the advantage is that junk data is kept from flowing through the system; the disadvantage is that configuration changes are troublesome, since every new log requires restarting or reloading the agent's configuration. If the permission control is placed on the collector side, the advantage is that configuration can be modified and reloaded conveniently; the disadvantage is that some unregistered data may still travel between agent and collector. Considering that the log transfer between agent and collector is not a system bottleneck, and that the current log collection is an internal system where security is a secondary concern, we chose collector-side control (a small sketch of this filtering follows below).
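Here is a minimal sketch of collector-side category filtering, assuming a registered-category whitelist and a simple dictionary-shaped event; both are assumptions made for the example.

```python
# Hypothetical set of registered categories; unregistered logs are dropped.
ALLOWED_CATEGORIES = {"order", "payment", "search"}

def filter_by_category(events, allowed=ALLOWED_CATEGORIES):
    """Keep only events whose category has been registered; count the rest so
    the dropped volume can be reported to monitoring."""
    kept, dropped = [], 0
    for event in events:
        if event.get("category") in allowed:
            kept.append(event)
        else:
            dropped += 1
    return kept, dropped

if __name__ == "__main__":
    sample = [
        {"category": "order", "body": "order created"},
        {"category": "debug", "body": "noise"},       # not registered, dropped
    ]
    kept, dropped = filter_by_category(sample)
    print(f"kept {len(kept)} events, dropped {dropped}")
```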

4.7 Real-time Stream Availability

Some of the company's businesses, such as real-time recommendation and anti-crawler services, need to process real-time data streams, so we want Flume to export a real-time stream to the Kafka/Storm systems. A very important requirement is that the real-time stream must not be slowed down by other sinks; its speed has to be guaranteed. Here we isolate the streams by giving them different channels inside the collector, and DualChannel's large capacity ensures that log processing is not held back by a slow sink.
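One way to picture this isolation: each sink drains its own channel, so a slow sink only backs up its own channel while the real-time path keeps moving. The following Python sketch uses threads and made-up latencies to show the effect; it is not Flume code.

```python
import queue
import threading
import time

def run_sink(name, channel, delay):
    """Drain one channel at a fixed speed; a slow sink only affects its own channel."""
    while True:
        event = channel.get()
        if event is None:
            break
        time.sleep(delay)                 # simulate the sink's write latency
        print(f"{name} handled {event!r} (backlog {channel.qsize()})")

if __name__ == "__main__":
    hdfs_channel = queue.Queue()          # drained slowly, e.g. HDFS under load
    kafka_channel = queue.Queue()         # drained quickly for the real-time stream

    threads = [
        threading.Thread(target=run_sink, args=("hdfs-sink", hdfs_channel, 0.5)),
        threading.Thread(target=run_sink, args=("kafka-sink", kafka_channel, 0.01)),
    ]
    for t in threads:
        t.start()

    # The collector puts a copy of every event into both channels, so the
    # Kafka path is never blocked by the slow HDFS path.
    for i in range(5):
        event = f"log line {i}"
        hdfs_channel.put(event)
        kafka_channel.put(event)

    for ch in (hdfs_channel, kafka_channel):
        ch.put(None)                      # poison pill to stop the sinks
    for t in threads:
        t.join()
```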

5 System Monitoring

Monitoring is an essential part of any large, complex system. With well-designed monitoring, abnormal situations can be discovered in time; as long as you have a phone, you can know whether the system is operating normally. For our log collection system, we have built multi-dimensional monitoring to guard against unknown anomalies.

5.1 Send rate, congestion, and HDFS write speed

By sending these counters to Zabbix, we can graph the number of events sent, the congestion situation, and the speed of writes to HDFS, and an alarm is raised when congestion exceeds expectations so the cause can be investigated.
(Figure: speed at which the Flume collector HdfsSink writes data to HDFS.)
(Figure: volume of events in the Flume collector's FileChannel.)
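The article does not show how the counters reach Zabbix. One common approach is the zabbix_sender command-line tool, which the sketch below assumes is installed; the server address, host name, and item keys are placeholders.

```python
import subprocess

def report_to_zabbix(key, value, zabbix_server="zabbix.example.com",
                     host="collector-01"):
    """Push one numeric metric to Zabbix via the zabbix_sender CLI.
    The server address, host name, and item key here are placeholders."""
    subprocess.run(
        ["zabbix_sender", "-z", zabbix_server, "-s", host,
         "-k", key, "-o", str(value)],
        check=True,
    )

if __name__ == "__main__":
    # Hypothetical item keys for send rate and FileChannel backlog.
    report_to_zabbix("flume.hdfs.write_events_per_min", 120000)
    report_to_zabbix("flume.filechannel.size", 0)
```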

5.2 Monitoring the status of Flume writes to HDFS

Flume writes to HDFS first as a tmp file. For particularly important logs, we check roughly every 15 minutes whether each collector has produced a tmp file; for any collector or log that has not produced a tmp file as expected, we check whether something is wrong. This allows Flume and log anomalies to be discovered in a timely manner.
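Here is a sketch of this periodic tmp-file check. A real deployment would list the HDFS output directories (for example with hdfs dfs -ls); to stay self-contained the sketch scans local directories, and the collector names and paths are assumptions.

```python
from pathlib import Path

# Hypothetical mapping from collector name to the directory its HdfsSink
# writes into; in practice these would be HDFS paths, one per collector.
COLLECTOR_DIRS = {
    "collector-01": Path("/data/flume/collector-01"),
    "collector-02": Path("/data/flume/collector-02"),
}

def collectors_missing_tmp(collector_dirs=COLLECTOR_DIRS):
    """Return collectors that have no open .tmp file, i.e. that do not appear
    to be writing at the moment, so they can be checked for problems."""
    missing = []
    for name, directory in collector_dirs.items():
        tmp_files = list(directory.glob("*.tmp")) if directory.exists() else []
        if not tmp_files:
            missing.append(name)
    return missing

if __name__ == "__main__":
    for name in collectors_missing_tmp():
        print(f"ALERT: {name} produced no .tmp file in the last check window")
```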

5.3 Log size anomaly monitoring

For important logs, we monitor every hour whether the log size fluctuates significantly compared with the same hour of the previous week, and issue a reminder when it does. This alarm has effectively caught abnormal logs, and has several times found anomalies in the logs sent by applications, letting us give the other team timely feedback and helping them repair their own systems early. As explained above, the Flume-based log collection system is a distributed service with high availability, high reliability, and scalability.
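A sketch of the week-over-week size check: compare this hour's log volume with the same hour one week earlier and flag large deviations. The threshold and the byte counts in the example are assumptions.

```python
def size_looks_abnormal(current_bytes, same_hour_last_week_bytes,
                        max_ratio=2.0):
    """Flag the hour as abnormal if the volume grew or shrank by more than
    max_ratio compared with the same hour one week earlier."""
    if same_hour_last_week_bytes == 0:
        return current_bytes > 0          # brand-new or previously silent log
    ratio = current_bytes / same_hour_last_week_bytes
    return ratio > max_ratio or ratio < 1.0 / max_ratio

if __name__ == "__main__":
    # e.g. 9 GB this hour vs. 3 GB in the same hour last week -> alert
    if size_looks_abnormal(9 * 2**30, 3 * 2**30):
        print("ALERT: hourly log volume fluctuated by more than 2x week-over-week")
```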
