Design of a Real-Time Distributed Log Stream Collection Platform (tail logs -> HDFS)

At present, there are several open-source distributed log systems in the industry, such as inotify + rsync, Facebook's Scribe, Apache Chukwa, LinkedIn's Kafka, and Cloudera's Flume. For more information about these open-source distributed log systems, see this article. However:

1. Although these open-source log systems provide real-time log tail output, once the tail process goes down it cannot resume from the point where collection stopped; it can only be restarted, which produces many duplicate logs.
2. These open-source log systems are comprehensive, but for our platform the goal is a simple, controllable design with only the functions we need.

This article describes the technical details of a log collection platform the author designed previously; it has been applied to an e-commerce open platform. The application scenario is as follows: each application client on the platform generates a large number of logs in real time, and these logs must be collected in real time and sent to a distributed file system (HDFS) for subsequent data mining and analysis. On HDFS, a new file is generated every day (the file prefix is the date, the suffix a serial number starting from 0); when a file exceeds the specified size, a new file is automatically created, again with the current date as prefix and the next serial number as suffix.

The system architecture and its components are as follows:

1. First, a log scanning thread scans the log directory. When a local log file is generated, a tail task is created and put into the tail task queue for scheduling.
2. A tail thread is obtained from the thread pool for each task in the queue. This thread tails the log file and outputs lines to the MQ. When outputting data to MQ, it records the current line number (lsn) to the lsn store.
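The tail-and-record flow of the tail thread can be sketched as follows. This is a minimal, hypothetical illustration: the `lsn_store` mapping and the `publish` MQ callback are assumptions for the sketch, not the platform's actual API.

```python
def tail_from_lsn(path, lsn_store, publish):
    """Tail a log file starting after the last recorded line number (lsn).

    lsn_store: dict-like mapping file path -> last line number shipped
               (a stand-in for the lsn store; assumption).
    publish:   callable that sends one log line to the MQ (assumption).

    On restart, lines up to the stored lsn are skipped, so a crashed
    tail thread resumes without producing duplicate logs.
    """
    start = lsn_store.get(path, 0)
    with open(path, "r") as f:
        for lineno, line in enumerate(f, start=1):
            if lineno <= start:          # already shipped before the crash
                continue
            publish(line.rstrip("\n"))   # output the line to MQ
            lsn_store[path] = lineno     # record the current lsn
```

A real implementation would run this in a loop per tail thread and seek by byte offset rather than re-reading from the start, but the recovery idea is the same: the stored lsn decides where shipping resumes.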
With the lsn recorded, if the tail thread goes down, tailing can resume (via a seek operation) from the log line at the time of the crash.

3. The Collector node starts multiple threads to fetch log data from MQ and sorts it by log category (such as operation logs, access logs, and service logs); different categories are written to different files on HDFS. If an error occurs during the write (for example, HDFS is temporarily unavailable; there is an alarm mechanism here), the data is saved locally and imported to HDFS in batches after HDFS returns to normal.

4. The lsn recorder records the line number currently being tailed. During tailing, the line number is first appended to an lsn buffer. When the buffer exceeds a size threshold (a few MB) or a time interval elapses, only the last line number is written to the lsn file and the buffer is cleared. The advantage is that appending to the buffer (instead of performing an update operation) is a very fast in-memory operation, which keeps crash recovery cheap. During a query, the buffer is searched first; only if the buffer has nothing is the file read.

5. HDFS file-name switching: a regular file is generated every day according to the policy above (the file prefix is the date, the suffix a serial number starting from 0); when the file size exceeds the specified size, a new file is automatically created with the current date as prefix and the next serial number as suffix. Two triggers implement this: a SizeTrigger (each call that writes to the HDFS output stream accumulates the total bytes written; once the total exceeds a threshold, a new file and output stream are created, write operations are redirected to the new stream, and the previous stream is closed) and a TimeTrigger (a timer fires at the rollover time, automatically creating a new file and output stream, redirecting new writes to it, and closing the previous stream).
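The naming-and-rollover policy driven by the two triggers can be sketched as below. This is an illustrative sketch only: the class and method names are assumptions, and the date-plus-serial naming follows the convention described in the text.

```python
import time

class RollingNamer:
    """Generates file names of the form <date>-<serial>, advancing the
    serial when the size threshold is exceeded (SizeTrigger) and
    resetting it to 0 on a new day (TimeTrigger)."""

    def __init__(self, max_bytes, clock=time.time):
        self.max_bytes = max_bytes
        self.clock = clock            # injectable clock for testing
        self.day = self._today()
        self.serial = 0
        self.written = 0

    def _today(self):
        return time.strftime("%Y-%m-%d", time.localtime(self.clock()))

    def current_name(self):
        return "%s-%d" % (self.day, self.serial)

    def on_write(self, nbytes):
        """Called for every write. Returns True when the caller must open
        a new file/stream, redirect writes to it, and close the old one."""
        rolled = False
        today = self._today()
        if today != self.day:                 # TimeTrigger: new day, serial restarts
            self.day, self.serial, self.written = today, 0, 0
            rolled = True
        if self.written > 0 and self.written + nbytes > self.max_bytes:
            self.serial += 1                  # SizeTrigger: size exceeded
            self.written = 0
            rolled = True
        self.written += nbytes
        return rolled
```

In the described design the Collector would consult such a component on every HDFS write; when it signals a roll, the previous output stream is closed so its buffered data reaches the datanodes.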
A note on HDFS flush behavior when writing data: during HDFS operations, data is written to the datanode disks only when the stream is flushed (automatically when a data packet is full, or manually) or closed. For HDFS, the default packet size is 64 KB; when buffered data exceeds 64 KB, a packet is automatically sent on the output stream to the datanode. When close is called, any remaining data that does not fill a packet is sent to the datanode output stream and written to the datanode disk, and the namenode is notified on completion (of course, when the data on a datanode exceeds a block size, such as 64 MB, the namenode is also notified).
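The packet behavior above can be mimicked with a simple buffered writer. This is only an analogy for illustration, not HDFS client code: the 64 KB packet size matches the text, everything else is an assumption.

```python
class PacketStream:
    """Buffers writes and emits fixed-size packets, mimicking how an HDFS
    client ships full 64 KB packets to the datanode automatically and
    ships the final partial packet only on close()."""

    PACKET = 64 * 1024  # default HDFS packet size per the text

    def __init__(self, sink):
        self.sink = sink  # callable receiving each packet (bytes); assumption
        self.buf = b""

    def write(self, data):
        self.buf += data
        while len(self.buf) >= self.PACKET:   # auto-flush every full packet
            self.sink(self.buf[:self.PACKET])
            self.buf = self.buf[self.PACKET:]

    def close(self):
        if self.buf:                          # remaining partial packet
            self.sink(self.buf)
            self.buf = b""
```

The consequence for the platform is the one the text implies: until a stream is flushed or closed, the tail of the data may still sit in client-side buffers, which is why the rollover triggers close the old stream when switching files.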

 
