Ultra-Lightweight Log Implementation for a Transaction System with a Daily Transaction Volume of Tens of Billions


First, some background. Two years ago I worked at a traditional financial software company, developing a large trading system for an exchange. The prices of the traded instruments arrive as streaming data, price movements trigger trades, and it is a T+0 trading system (similar to stock trading, except the instruments are not stocks and positions can be closed at any time). Because the system must record massive amounts of price data, trade information, and order flow, and has extremely demanding performance requirements (millisecond-level sensitivity), the logging service must not become a performance bottleneck. After running performance stress tests on several general-purpose logging libraries (such as log4j and logback), and considering how bloated they are as general-purpose tools, we decided to write our own logging tool to meet the system's functional and performance requirements. At the time, the implementation was simply a utility class inside the project, only a few hundred lines of code.

Within two months of going live, the system's daily turnover reached 20 billion RMB, peaking at 44 billion RMB, with a peak throughput of 4,000 transactions per second. The system is huge, yet those few hundred lines of code flawlessly supported this critical logging service!

In view of its outstanding performance, I later took a little time to extract it as an independent logging component named FLogger, with almost no code changes. It is now hosted on GitHub (FLogger); interested readers are welcome to clone it to study or improve it. Its current implementation is very simple.

The above is the background of FLogger. Now let's go to the topic.

Features

Although FLogger consists of only a few hundred lines of code, it is small but complete, with a rich feature set:

  • Dual-buffer queue
  • Multiple disk flushing mechanisms: time-triggered, cache-size-triggered, and forced flushing on service shutdown
  • Multiple RollingFile mechanisms: rolling by file size, daily rolling, and so on
  • Multiple log levels: debug, info, warn, error, and fatal
  • Hot loading of configuration at runtime
  • Ultra-lightweight, with no dependency on any third-party library
  • Proven performance, having successfully supported a transaction system with a daily transaction volume of tens of billions
Usage

Since it is a lightweight logger, it must be simple to use. To match users' existing habits as closely as possible, FLogger provides almost the same logging API as log4j. You only need to obtain an instance first; the usage below is very simple:

// Obtain the singleton instance
FLogger logger = FLogger.getInstance();
// Simple API: you only need to specify the content
logger.info("Here is your message...");
// Specify the log level and content; the file name is mapped automatically
logger.writeLog(Constant.INFO, "Here is your customized level message...");
// Specify the log output file name, log level, and content
logger.writeLog("error", Constant.ERROR, "Here is your customized log file and level message...");

Before use, you need to create a log.properties file in the project root path. The configuration is as follows:

########## Common configuration ##########
# Character set
CHARSET_NAME = UTF-8
########## Log configuration ##########
# Log levels: 0 debug, 1 info, 2 warn, 3 error, 4 fatal
LOG_LEVEL = 0,1,2,3,4
# Log file storage path
LOG_PATH = ./log
# Interval between writes to the log file (default 1000 ms)
WRITE_LOG_INV_TIME = 1000
# Maximum size of a single log file (default 10 MB)
SINGLE_LOG_FILE_SIZE = 10485760
# Maximum cache size of a single log file (default 10 KB)
SINGLE_LOG_CACHE_SIZE = 10240

Of course, for maximum convenience, FLogger provides default values for all configuration items, so you do not have to worry about an exception being thrown when the configuration file is missing.
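As a rough illustration of how such defaults can work, here is a minimal sketch of a config loader where every lookup carries a fallback value (this is our own sketch, not FLogger's actual code; the class and method names are made up):

```java
import java.io.FileInputStream;
import java.io.IOException;
import java.util.Properties;

// Illustrative config loader: every lookup has a default, so a missing
// log.properties file (or a missing key) never causes an exception.
public class LogConfig {
    private final Properties props = new Properties();

    public LogConfig(String path) {
        try (FileInputStream in = new FileInputStream(path)) {
            props.load(in);
        } catch (IOException ignored) {
            // No config file: fall through and use defaults everywhere.
        }
    }

    public long getLong(String key, long defaultValue) {
        String v = props.getProperty(key);
        if (v == null) return defaultValue;
        try {
            return Long.parseLong(v.trim());
        } catch (NumberFormatException e) {
            return defaultValue;
        }
    }
}
```

With no file present, `getLong("WRITE_LOG_INV_TIME", 1000L)` simply returns 1000.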

At this point you may be curious about the format of the logs FLogger prints. Will they be messy and hard to read, or too incomplete to reconstruct the context? No need to worry: FLogger provides a standard and practical log format that makes it easy to understand a message and find its context.

Let's take a look at the result printed by the demo code above:

info.log

[INFO] 2016-12-06 21:07:32:840 [main] Here is your message...

warn.log

[WARN] 2016-12-06 21:07:32:842 [main] Here is your customized level message...

error.log

[ERROR] 2016-12-06 21:07:32:842 [main] Here is your customized log file and level message...

As you can see above, the log level, time, content, and other information are clearly distinguishable. A log line consists of the following elements:

[log level] time (precise to the millisecond) [current thread name] log content

Of course, for simplicity, FLogger does not currently support user-defined log formats. After all, it is not designed to be a universal or highly customizable logger.
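The fixed layout above is easy to reproduce. Here is a minimal sketch (our own helper, not FLogger's internal code) that builds a line in the same shape:

```java
import java.text.SimpleDateFormat;
import java.util.Date;

// Illustrative formatter producing the layout shown above:
// [LEVEL] yyyy-MM-dd HH:mm:ss:SSS [threadName] message
public class LogLineFormat {
    public static String format(String level, long millis, String thread, String msg) {
        // Note the ':' before the milliseconds, matching the sample output
        SimpleDateFormat fmt = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss:SSS");
        return "[" + level + "] " + fmt.format(new Date(millis))
                + " [" + thread + "] " + msg;
    }
}
```

For example, `LogLineFormat.format("INFO", now, "main", "Here is your message...")` yields a line like the info.log sample above.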

Source code parsing

The above describes how to use FLogger. The following sections walk through the source code behind each of its features.

Dual-Buffer Queue

FLogger uses a dual-buffer queue internally. What is a dual-buffer queue, and what is its role?

FLogger maintains an internal object LogFileItem for each log file, which is defined as follows:

public class LogFileItem {
    /** Log file name without path or extension, e.g. MsgInner */
    public String logFileName = "";
    /** Full log file name including path */
    public String fullLogFileName = "";
    /** Current log file size */
    public long currLogSize = 0;
    /** Log buffer currently in use ('A' or 'B') */
    public char currLogBuff = 'A';
    /** Log buffer list A */
    public ArrayList<StringBuffer> alLogBufA = new ArrayList<StringBuffer>();
    /** Log buffer list B */
    public ArrayList<StringBuffer> alLogBufB = new ArrayList<StringBuffer>();
    /** Next time logs are written out to the file */
    public long nextWriteTime = 0;
    /** Date of the last write */
    public String lastPCDate = "";
    /** Current cache size */
    public long currCacheSize = 0;
}

Each time a log is written, the message is appended as a StringBuffer to the ArrayList<StringBuffer> currently in use, while the other list stays idle. When the in-memory logs are flushed to the disk file, the roles of the two lists are swapped: after the swap, the previously idle list receives new log messages, while the list holding the accumulated messages is written out to disk. This way, flushing never blocks log intake, and multithreading contention is kept to a minimum. The process is as follows:

// Write a log message (synchronized on the LogFileItem)
synchronized (lfi) {
    if (lfi.currLogBuff == 'A') {
        lfi.alLogBufA.add(logMsg);
    } else {
        lfi.alLogBufB.add(logMsg);
    }
    lfi.currCacheSize += CommUtil.stringToBytes(logMsg.toString()).length;
}

Log flushing code:

// Obtain the buffer list to be written out and swap the buffers
ArrayList<StringBuffer> alWrtLog = null;
synchronized (lfi) {
    if (lfi.currLogBuff == 'A') {
        alWrtLog = lfi.alLogBufA;
        lfi.currLogBuff = 'B';
    } else {
        alWrtLog = lfi.alLogBufB;
        lfi.currLogBuff = 'A';
    }
    lfi.currCacheSize = 0;
}
// Create the log file if necessary
createLogFile(lfi);
// Write the logs out
int iWriteSize = writeToFile(lfi.fullLogFileName, alWrtLog);
lfi.currLogSize += iWriteSize;
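The write/swap pattern above can be condensed into a small standalone sketch (the class name and API here are illustrative; FLogger itself keeps two ArrayList<StringBuffer> fields per log file and hands one to the flusher on each swap):

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative double-buffer: writers append to the active list while the
// flusher drains the other; the swap happens under a short lock, so log
// intake is never blocked by disk I/O.
public class DoubleBuffer {
    private final List<String> bufA = new ArrayList<>();
    private final List<String> bufB = new ArrayList<>();
    private char current = 'A';

    public synchronized void append(String msg) {
        (current == 'A' ? bufA : bufB).add(msg);
    }

    // Swap the buffers and hand the pending messages to the flusher.
    public synchronized List<String> swapForFlush() {
        List<String> full = (current == 'A') ? bufA : bufB;
        current = (current == 'A') ? 'B' : 'A';
        List<String> pending = new ArrayList<>(full);
        full.clear(); // ready to become the active buffer again
        return pending;
    }
}
```

Only the swap itself is synchronized; the slow disk write happens outside the lock, which is the whole point of the design.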
Multiple disk flushing mechanisms

FLogger supports multiple disk flushing mechanisms:

  • Disk flushing interval trigger
  • Memory buffer size trigger
  • Force exit

The following analyzes each in turn.

Disk flushing interval trigger

The configuration items are as follows:

# Interval between writes to the log file (default 1000 ms)
WRITE_LOG_INV_TIME = 1000

When this interval has elapsed since the last flush, the in-memory logs are flushed to disk.

Memory buffer size trigger

The configuration items are as follows:

# Maximum cache size of a single log file (default 10 KB)
SINGLE_LOG_CACHE_SIZE = 10240

When the size of the in-memory buffer queue exceeds the configured size, the in-memory logs are flushed to disk.

Force exit

FLogger registers a JVM shutdown hook via Runtime.addShutdownHook. When the JVM shuts down normally, the hook triggers a forced flush to disk, preventing the loss of in-memory logs. The relevant code is as follows:

public FLogger() {
    Runtime.getRuntime().addShutdownHook(new Thread(new Runnable() {
        @Override
        public void run() {
            close();
        }
    }));
}

When the JVM exits unexpectedly, there is no guarantee that all in-memory logs reach the disk. However, you can improve the real-time behavior of flushing with a compromise: set SINGLE_LOG_CACHE_SIZE = 0 or WRITE_LOG_INV_TIME = 0.

The disk flushing code is as follows:

/** Thread method */
public void run() {
    int i = 0;
    while (bIsRun) {
        try {
            // Wait for a while
            Thread.sleep(200);
            // Write out to the file
            flush(false);
            // Re-read the log level periodically
            if (i++ % 100 == 0) {
                Constant.CFG_LOG_LEVEL = CommUtil.getConfigByString("LOG_LEVEL", "0,1,2,3,4");
                i = 1;
            }
        } catch (Exception e) {
            System.out.println("An error occurred in the log service...");
            e.printStackTrace();
        }
    }
}

/** Close method */
public void close() {
    bIsRun = false;
    try {
        flush(true);
    } catch (Exception e) {
        System.out.println("An error occurred while closing the log service...");
        e.printStackTrace();
    }
}

/**
 * Write cached logs out to files
 * @param bIsForce whether to force cached logs out to the file
 */
private void flush(boolean bIsForce) throws IOException {
    long currTime = System.currentTimeMillis();
    Iterator<String> iter = logFileMap.keySet().iterator();
    while (iter.hasNext()) {
        LogFileItem lfi = logFileMap.get(iter.next());
        if (currTime >= lfi.nextWriteTime
                || SINGLE_LOG_CACHE_SIZE <= lfi.currCacheSize
                || bIsForce == true) {
            // Obtain the buffer list to be written out and swap the buffers
            ArrayList<StringBuffer> alWrtLog = null;
            synchronized (lfi) {
                if (lfi.currLogBuff == 'A') {
                    alWrtLog = lfi.alLogBufA;
                    lfi.currLogBuff = 'B';
                } else {
                    alWrtLog = lfi.alLogBufB;
                    lfi.currLogBuff = 'A';
                }
                lfi.currCacheSize = 0;
            }
            // Create the log file if necessary
            createLogFile(lfi);
            // Write the logs out
            int iWriteSize = writeToFile(lfi.fullLogFileName, alWrtLog);
            lfi.currLogSize += iWriteSize;
        }
    }
}
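The condition guarding the flush in the code above boils down to a three-way predicate: interval elapsed, cache limit reached, or forced shutdown. A minimal sketch (the class and method names here are ours, for illustration only):

```java
// Illustrative flush predicate mirroring the three triggers in flush():
// time-triggered, cache-size-triggered, or forced on shutdown.
public class FlushPolicy {
    public static boolean shouldFlush(long now, long nextWriteTime,
                                      long cacheSize, long cacheLimit,
                                      boolean force) {
        return force || now >= nextWriteTime || cacheSize >= cacheLimit;
    }
}
```

Isolating the predicate like this makes each trigger easy to test on its own.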
Multiple RollingFile mechanisms

Similar to log4j/logback, FLogger also supports multiple RollingFile mechanisms:

  • Rolling by file size
  • Daily Rolling

Rolling by file size. The configuration item is:

# Maximum size of a single log file (default 10 MB)
SINGLE_LOG_FILE_SIZE = 10485760

That is, when the file exceeds the configured size, a new log file is created and the old file is renamed to "log file name_date_time.log" (for example, info_20161208_011105.log).

Daily rolling generates a different set of files each day.

The list of generated log files can be found below:

info_20161207_101105.log
info_20161207_122010.log
info_20161208_011110.log
info_20161208_015010.log
info.log

The log file currently being written is info.log.

The key code is as follows:

/**
 * Create a log file
 * @param lfi
 */
private void createLogFile(LogFileItem lfi) {
    // Current system date
    String currPCDate = TimeUtil.getPCDate('-');
    // Roll the file if it exceeds the single-file size limit
    if (lfi.fullLogFileName != null && lfi.fullLogFileName.length() > 0
            && lfi.currLogSize >= LogManager.SINGLE_LOG_FILE_SIZE) {
        File oldFile = new File(lfi.fullLogFileName);
        if (oldFile.exists()) {
            String newFileName = Constant.CFG_LOG_PATH + "/" + lfi.lastPCDate + "/"
                    + lfi.logFileName + "_" + TimeUtil.getPCDate() + "_"
                    + TimeUtil.getCurrTime() + ".log";
            File newFile = new File(newFileName);
            boolean flag = oldFile.renameTo(newFile);
            System.out.println("The log has been automatically backed up as "
                    + newFile.getName() + (flag ? " successfully!" : " unsuccessfully!"));
            lfi.fullLogFileName = "";
            lfi.currLogSize = 0;
        }
    }
    // Create a new file if necessary
    if (lfi.fullLogFileName == null || lfi.fullLogFileName.length() <= 0
            || lfi.lastPCDate.equals(currPCDate) == false) {
        String sDir = Constant.CFG_LOG_PATH + "/" + currPCDate;
        File file = new File(sDir);
        if (file.exists() == false) {
            file.mkdir();
        }
        lfi.fullLogFileName = sDir + "/" + lfi.logFileName + ".log";
        lfi.lastPCDate = currPCDate;
        file = new File(lfi.fullLogFileName);
        if (file.exists()) {
            lfi.currLogSize = file.length();
        } else {
            lfi.currLogSize = 0;
        }
    }
}
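The backup-name pattern used above (base name plus date and time) can be sketched as a small helper. This is our own illustration, not FLogger's code:

```java
import java.text.SimpleDateFormat;
import java.util.Date;

// Illustrative rolled-file name builder matching the pattern shown above,
// e.g. info_20161208_011105.log (the helper name is ours, not FLogger's).
public class RollName {
    public static String backupName(String baseName, long millis) {
        SimpleDateFormat fmt = new SimpleDateFormat("yyyyMMdd_HHmmss");
        return baseName + "_" + fmt.format(new Date(millis)) + ".log";
    }
}
```

For example, `RollName.backupName("info", now)` yields a name shaped like the entries in the file list above.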
Multiple log levels

FLogger supports multiple log levels:

  • DEBUG
  • INFO
  • WARN
  • ERROR
  • FATAL

FLogger provides simple APIs for each log level.

When an error- or fatal-level log is printed, FLogger also outputs the message to the console by default.

Hot loading

FLogger supports hot loading of its configuration. Internally it does not use an event-driven approach (that is, it is not notified of events when the configuration file is added, modified, or deleted); instead, hot loading happens at a fixed frequency. Concretely, the configuration is re-read after every 100 flush cycles (the frequency is adjustable). The key code is as follows:

int i = 0;
while (bIsRun) {
    try {
        // Wait for a while
        Thread.sleep(200);
        // Write out to the file
        flush(false);
        // Re-read the log level
        if (i++ % 100 == 0) {
            Constant.CFG_LOG_LEVEL = CommUtil.getConfigByString("LOG_LEVEL", "0,1,2,3,4");
            // Hot loading of other configuration items...
            i = 1;
        }
    } catch (Exception e) {
        System.out.println("An error occurred in the log service...");
        e.printStackTrace();
    }
}

This keeps the code concise and the functionality pure. Event-driven hot loading is undoubtedly the better approach, but it requires an extra thread and event listening on the configuration file. If you are interested, you can implement it yourself.
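For reference, here is a hedged sketch of what the event-driven variant could look like using java.nio.file.WatchService. This is not part of FLogger; the class name and API are illustrative, and a real implementation would loop forever and trigger a reload on each change:

```java
import java.nio.file.FileSystems;
import java.nio.file.Path;
import java.nio.file.StandardWatchEventKinds;
import java.nio.file.WatchEvent;
import java.nio.file.WatchKey;
import java.nio.file.WatchService;
import java.util.concurrent.TimeUnit;

// Illustrative event-driven alternative: block until the config file in
// the watched directory changes (or the timeout passes); the caller then
// re-reads LOG_LEVEL and the other configuration items.
public class ConfigWatcher {
    public static boolean awaitChange(Path dir, String fileName, long timeoutMs)
            throws Exception {
        try (WatchService ws = FileSystems.getDefault().newWatchService()) {
            dir.register(ws, StandardWatchEventKinds.ENTRY_CREATE,
                             StandardWatchEventKinds.ENTRY_MODIFY);
            long deadline = System.currentTimeMillis() + timeoutMs;
            while (System.currentTimeMillis() < deadline) {
                WatchKey key = ws.poll(200, TimeUnit.MILLISECONDS);
                if (key == null) continue;
                for (WatchEvent<?> ev : key.pollEvents()) {
                    if (fileName.equals(String.valueOf(ev.context()))) {
                        return true; // config changed: reload here
                    }
                }
                key.reset();
            }
            return false; // no change within the timeout
        }
    }
}
```

This trades the periodic re-read for an extra watcher thread, which is exactly the cost the fixed-frequency design avoids.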

Performance Guarantee

FLogger successfully supported the logging service of a transaction system with a daily transaction volume of tens of billions, so its performance is battle-tested. Below is a simple performance comparison between FLogger and log4j.

Test environment: Intel(R) Core(TM) i5-3470 CPU @ 3.20 GHz, 4.00 GB memory, 64-bit operating system.

Test scenario: 1,000,000 log records of 72 bytes each are written to a single log file.

The FLogger configuration is as follows:

# Interval between writes to the log file
WRITE_LOG_INV_TIME = 0
# Maximum size of a single log file
SINGLE_LOG_FILE_SIZE = 104857600
# Maximum cache size of a single log file
SINGLE_LOG_CACHE_SIZE = 0

The above configuration ensures that all logs are written to a single file and that no record lingers in the in-memory cache, reducing measurement error.

Test code:

FLogger logger = FLogger.getInstance();                   // FLogger
// Logger logger = Logger.getLogger(Log4jTest.class);     // log4j
String record = "Performance Testing about log4j and cyfonly customized java project log."; // 72 bytes
long st = System.currentTimeMillis();
for (int i = 0; i < 1000000; i++) {
    logger.info(record);
}
long et = System.currentTimeMillis();
System.out.println("FLogger/log4j write 1000000 records with each record 72 bytes, cost: " + (et - st) + " milliseconds");

Log content:

FLogger: [INFO] 2016-12-06 21:40:06:842 [main] Performance Testing about log4j and cyfonly customized java project log.
log4j:   [INFO ] 2016-12-06 21:41:12,852 [main] Log4jTest:12, Performance Testing about log4j and cyfonly customized java project log.

Test results (average over 10 runs):

FLogger write 1000000 records with each record 72 bytes, cost: 2144 milliseconds
log4j write 1000000 records with each record 72 bytes, cost: 12691 milliseconds

Note: the measured time is the time until the logs are successfully flushed to disk. Environmental factors introduce some error, and this is only a simple test, not the most rigorous or fairest comparison. Interested readers can run tests with higher accuracy. You are welcome to discuss privately; QQ: 869827095.

 
