To do a good job, first sharpen your tools
Many programmers forget how important it is to log an application's behavior, and you only realize how much logging matters once you hit a concurrency bug under heavy load in a multithreaded environment.
Some people are happy to drop a line like this into the code:

log.info("Happy and carefree logging");
They may not even realize how important application logs are for maintenance, tuning, and failure diagnosis. I consider SLF4J the best logging API available, mostly because of its message-pattern substitution:
log.debug("Found {} records matching filter: '{}'", records, filter);
With Log4j you can only write:

log.debug("Found " + records + " records matching filter: '" + filter + "'");
This is not only more verbose and less readable, it is also less efficient because of string concatenation (which happens even when the level is not printed).
SLF4J's {} substitution avoids concatenating strings on every call: when the level is disabled, toString() is never invoked and no isDebugEnabled() guard is needed. SLF4J is an application of the facade pattern; it is only a front. As the implementation behind it I recommend the Logback framework, which I have advertised before, rather than the already-finished Log4j. Logback has many interesting features and, unlike Log4j, is under active development.
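To see why the parameterized style is cheaper, here is a minimal, self-contained sketch. The tiny debugConcat/debugParam helpers are stand-ins for a real logger, not SLF4J's API; a counting toString() shows that concatenation formats the message even when the level is off.

```java
public class ToStringCostDemo {
    static class Filter {
        static int toStringCalls = 0;
        @Override
        public String toString() {
            toStringCalls++;              // count how often formatting actually happens
            return "expensive-filter";
        }
    }

    static final boolean DEBUG_ENABLED = false;  // pretend DEBUG is turned off

    // Concatenation style: the message String is built BEFORE the call.
    static void debugConcat(String message) {
        if (DEBUG_ENABLED) System.out.println(message);
    }

    // Parameterized style: formatting is deferred until the level check passes.
    static void debugParam(String pattern, Object arg) {
        if (DEBUG_ENABLED) System.out.println(pattern.replace("{}", arg.toString()));
    }

    public static void main(String[] args) {
        Filter filter = new Filter();
        debugConcat("Found records matching filter: " + filter); // toString() runs anyway
        debugParam("Found records matching filter: {}", filter); // toString() skipped
        System.out.println(Filter.toStringCalls);                // prints 1, not 2
    }
}
```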
Another tool worth recommending is Perf4j:

Perf4j is to System.currentTimeMillis() as Log4j is to System.out.println()
Just as log4j is a better alternative to SYSTEM.OUT.PRINTLN, perf4j is more of a substitute for System.currenttimemillis (). I have introduced perf4j to a project and observed its performance under high loads. Both administrators and enterprise users were stunned by the beautiful charts provided by the gadget. We can view performance issues at any time. PERF4J should write an article specifically, now you can look at its developer's Guide. There is also a Ceki Gülcü (creator of LOG4J,SLF4J and Logback Engineering) that provides an easy way for us to remove reliance on commons-logging.
Don't forget the log level
Every time you add a log statement, you have to wonder: which log level should I use here? About 90% of programmers never pay attention to this question; they log everything at a single level, usually INFO or DEBUG. Why?
Compared with System.out, logging frameworks have two advantages: categories and levels. Both let you filter log statements selectively, either permanently or only while troubleshooting.
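In Logback, for example, both mechanisms are plain configuration. The package names below are placeholders for your own, and the STDOUT appender is assumed to be defined elsewhere in the file:

```xml
<configuration>
  <!-- Everything defaults to INFO... -->
  <root level="INFO">
    <appender-ref ref="STDOUT"/>
  </root>
  <!-- ...a chatty third-party category is silenced permanently... -->
  <logger name="org.apache.http" level="WARN"/>
  <!-- ...and one of our own categories is turned up while troubleshooting. -->
  <logger name="com.example.billing" level="DEBUG"/>
</configuration>
```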
ERROR: a serious failure has happened and must be dealt with immediately. No system should tolerate entries at this level. Examples: a null pointer exception, the database being unavailable, a use case's critical path unable to continue.
WARN: processing continues, but the entry deserves attention. Actually I would like two levels here: one for obvious problems where a workaround exists (for example, "current data unavailable, using cached values"), and one for potential problems and suggestions (such as "application running in development mode" or "administration console is not secured with a password"). The application can tolerate these, but they should be examined and fixed.
DEBUG: what developers care about. I will discuss later what belongs at this level.
TRACE: very detailed information, used only during development. You might keep it around for a short period after deployment, but treat these statements as temporary ones that should eventually be turned off. The distinction between DEBUG and TRACE is tricky, but if you add a log statement during development and intend to delete it once testing is done, it should be TRACE.
The list above is only a suggestion; you can log according to your own rules, but it is better to have some. My personal experience: do not filter logs at the code level; use the correct log levels so you can quickly filter out the information you need, which will save you a lot of time.
The last thing to mention is the infamous is*Enabled conditional. Some people put this before every log statement:
if (log.isDebugEnabled())
    log.debug("Place for your commercial");
Personally, avoid cluttering your code with this. The performance gain seems negligible (especially with SLF4J), and it looks like premature optimization. Also, isn't it a bit redundant? Very rarely is such an explicit check needed, unless we can prove that constructing the log message itself is expensive. Otherwise, don't overthink it and let the logging framework worry about it.
Do you know what you are logging?
Every time you write a log statement, take a moment to look at what will actually land in the log file. Read your logs and spot the anomalies. Above all, avoid null pointer exceptions:
log.debug("Processing request with id: {}", request.getId());
Have you verified that request is not null?
Logging collections is another big pitfall. If you fetch a collection of domain objects from the database with Hibernate and carelessly write: log.debug("Returning users: {}", users);
SLF4J will call toString() only if the statement is actually printed, which is nice, of course. But: an out-of-memory error, an N+1 select problem, thread starvation, a lazy initialization exception, exhausted log storage... all of these can happen. It is far better to log only the ids of the objects (or just the collection's size). But collecting ids requires calling getId() on every object, which is surprisingly clumsy in Java. Groovy has the handy spread operator (users*.id); in Java we can emulate it with the Commons BeanUtils library:
log.debug("Returning user ids: {}", collect(users, "id"));
The collect method might be implemented roughly like this:

public static Collection collect(Collection collection, String propertyName) {
    return CollectionUtils.collect(collection, new BeanToPropertyValueTransformer(propertyName));
}
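On Java 8 and later you no longer need Commons BeanUtils for this; the Streams API does it directly. A self-contained sketch, where the User class is a made-up stand-in for a domain object:

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class CollectIdsDemo {
    // Hypothetical domain object standing in for a Hibernate entity.
    static class User {
        private final long id;
        User(long id) { this.id = id; }
        long getId() { return id; }
    }

    // The Java 8 equivalent of Groovy's users*.id spread.
    static List<Long> ids(List<User> users) {
        return users.stream().map(User::getId).collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<User> users = Arrays.asList(new User(1), new User(2), new User(3));
        // Log only the ids, never the full (possibly lazily loaded) objects.
        System.out.println("Returning user ids: " + ids(users)); // [1, 2, 3]
    }
}
```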
Finally, remember that toString() may be missing or implemented poorly.
First, creating a toString() for every class, preferably generated with ToStringBuilder (but not its reflection-based variant), is a ubiquitous practice for the sake of logging.
Second, watch out for arrays and atypical collections. Arrays and some exotic collections may have toString() implementations that do not call toString() on each element. The JDK's Arrays#deepToString helps here. Regularly check the logs you print for malformed entries.
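A short, self-contained illustration of the difference for arrays:

```java
import java.util.Arrays;

public class ArrayToStringDemo {
    public static void main(String[] args) {
        int[][] matrix = {{1, 2}, {3, 4}};

        // The inherited toString() yields only a type-and-hash marker, useless in a log.
        System.out.println(matrix);

        // Arrays.toString() descends one level only; inner arrays stay opaque.
        System.out.println(Arrays.toString(matrix));

        // Arrays.deepToString() renders every element recursively.
        System.out.println(Arrays.deepToString(matrix)); // [[1, 2], [3, 4]]
    }
}
```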
Avoid side effects
Logging usually does not affect a program's behavior much, but a friend of mine recently had a system that threw a Hibernate LazyInitializationException on certain platforms only. As you may have guessed, some log statement caused a lazily initialized collection to be loaded while the session was attached. On those platforms the log level was raised, and the collection was no longer initialized. How long would it take you to find that bug without knowing this context?
Another side effect is on execution speed. The quick answer: if you log too much or fail to handle toString() and string concatenation properly, logging will hurt performance. How much? Well, I have seen a program restart every 15 minutes because of thread starvation caused by excessive logging. Now that is a side effect! From my experience, a few hundred megabytes per hour is close to the upper limit.
Of course, the side effect is much worse if a logging exception aborts your business process. I have often seen people write this to guard against it:

try {
    log.trace("Id=" + request.getUser().getId() + " accesses " + manager.getPage().getUrl().toString());
} catch (NullPointerException e) {}
This is real code, but please, to make the world a slightly cleaner place, don't write it.
Make the description clear
Every log record contains data and a description. Compare these examples:

log.debug("Message processed");
log.debug(message.getJMSMessageID());
log.debug("Message with id '{}' processed", message.getJMSMessageID());
Which of these would you like to see while troubleshooting a bug in an unfamiliar system? Believe me, all of them are common. Another anti-pattern:
if (message instanceof TextMessage)
    //...
else
    log.warn("Unknown message type");

Would it be difficult to include the message type, the message id, and so on in that warning? I know something happened, but what exactly? What was the context?
The third anti-pattern is the "magic log". A real example: most programmers on the team knew that three ampersands followed by an exclamation mark, followed by a hash, followed by a pseudo-random number means "message with id XYZ received". Nobody wanted to change the logging; somebody once banged the keyboard, picked the unique "&&&!#" string, and could grep for it quickly afterwards.
As a result, the whole log file looks like a stream of random characters. One might wonder whether it is a Perl program.
A log file should be readable, clean and self-descriptive. Avoid magic numbers; log values, numbers and ids together with their context. Log the data being processed and what it means. Log what the program is doing. Good logs are good documentation of the code.
Did I mention not to log passwords or personal information? I trust no programmer would be that silly.
Adjust your log format
The logging pattern is a useful tool that transparently adds meaningful context to every record. But you must think about what belongs in it. For example, logging the date is pointless when your log files roll over every hour, because the file name already carries that information. Conversely, without the thread name you cannot follow any single thread once two of them work in parallel: their logs interleave. That is fine in a single-threaded application, but those are a thing of the past.
From my experience, the ideal pattern should include (besides the log message itself, of course): the current time (without the date, with millisecond precision), the log level, the thread name, the simple logger name (not fully qualified) and the message. In Logback it looks like this:
<appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
    <encoder>
        <pattern>%d{HH:mm:ss.SSS} %-5level [%thread][%logger{0}] %m%n</pattern>
    </encoder>
</appender>
I never include the file name, class name or line number, although they might seem useful. I have even seen empty log statements in code: log.info(""); — because the programmer counted on the line number being part of the pattern, and knew that if this empty message showed up at line 67 of the file, it meant the user was authenticated. Besides, logging the class name, method name or line number carries a significant performance cost.
A relatively advanced feature of logging frameworks is the mapped diagnostic context (MDC). The MDC is simply a thread-local map. You can put any key-value pair into it, and every log record produced by that thread can then include those values as part of the output pattern.
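Since the MDC is conceptually just a thread-local map, its mechanics can be sketched in a few lines of plain Java. This mirrors the idea behind org.slf4j.MDC, not its actual implementation:

```java
import java.util.HashMap;
import java.util.Map;

public class MiniMdc {
    private static final ThreadLocal<Map<String, String>> CONTEXT =
            ThreadLocal.withInitial(HashMap::new);

    public static void put(String key, String value) { CONTEXT.get().put(key, value); }
    public static String get(String key) { return CONTEXT.get().get(key); }
    public static void clear() { CONTEXT.get().clear(); }

    public static void main(String[] args) throws InterruptedException {
        put("user", "alice");
        // A pattern token such as %X{user} would read this value on every record:
        System.out.println("[user=" + get("user") + "] handling request");

        // A different thread gets its own, empty context:
        Thread other = new Thread(() ->
                System.out.println("[user=" + get("user") + "] other thread")); // user=null
        other.start();
        other.join();
    }
}
```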
Log method arguments and return values
When you find a bug during development, you typically attach a debugger and track down the cause. Now suppose you cannot use a debugger, for example because the bug surfaced in a customer's environment a few days ago and all you got is logs. What can you find in them?
If you follow the simple rule of logging each method's arguments and return value, you hardly need a debugger at all. Of course, be reasonable: focus on methods that access external systems, block, wait, and so on. Just follow this pattern:
public String printDocument(Document doc, Mode mode) {
    log.debug("Entering printDocument(doc={}, mode={})", doc, mode);
    String id = ...; // lengthy printing operation
    log.debug("Leaving printDocument(): {}", id);
    return id;
}
Because you log at both the beginning and the end of each method, you can manually spot inefficient code and even detect likely causes of deadlock and starvation: just look for an "Entering" with no "Leaving" after it. If your method names are meaningful, reading such logs is a pleasure. Likewise, analyzing exceptions becomes much simpler, since you know what was happening at every step. With many methods to instrument this way, you can use an AOP aspect to do it. That reduces duplicated code, but use it with great care: it can easily produce enormous amounts of output.
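Short of full AOP, the JDK's dynamic proxies can apply the entering/leaving pattern automatically to any interface. A sketch under stated assumptions: the DocumentPrinter interface and the traced() helper below are made up for illustration, and System.out stands in for a logger.

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;
import java.util.Arrays;

public class TraceProxyDemo {
    interface DocumentPrinter {
        String printDocument(String doc, String mode);
    }

    // Wraps any implementation of iface so every call logs entry and exit.
    @SuppressWarnings("unchecked")
    static <T> T traced(T target, Class<T> iface) {
        InvocationHandler handler = (proxy, method, args) -> {
            System.out.printf("Entering %s(%s)%n", method.getName(), Arrays.toString(args));
            Object result = method.invoke(target, args);
            System.out.printf("Leaving %s(): %s%n", method.getName(), result);
            return result;
        };
        return (T) Proxy.newProxyInstance(
                iface.getClassLoader(), new Class<?>[]{iface}, handler);
    }

    public static void main(String[] args) {
        DocumentPrinter printer = traced((doc, mode) -> "id-42", DocumentPrinter.class);
        String id = printer.printDocument("report.pdf", "DUPLEX");
        System.out.println(id); // id-42
    }
}
```

This keeps the tracing in one place, at the cost of working only through interfaces; real AOP frameworks lift that restriction.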
The appropriate levels for this kind of logging are DEBUG and TRACE. If you find a method being called so frequently that logging it might hurt performance, simply lower its log level or remove the statement (or maybe leave just one per method call?). But it is always better to have too many logs than too few. Treat logging like unit testing: your code should be full of log statements just as it is full of unit tests. No layer of the system should be entirely free of logs. Remember, sometimes the only way to tell whether your system is working properly is to watch the logs scrolling by.
Observing external systems
This advice is a special case of the previous one: if you communicate with an external system, log every piece of data that leaves your system and comes back in. System integration is hard work, and diagnosing problems between two applications (imagine different companies, environments and technical teams) is especially difficult. Recently, for example, we found that logging complete message contents, including SOAP and HTTP headers via Apache CXF, was extremely useful during integration and in the system testing phase.
This is a big overhead, and if performance suffers you can always turn the logging off. But what if your system runs fast, fails fast, and you cannot do anything about it? When integrating with external systems, be extra careful and be prepared to sacrifice some overhead. If you are lucky and the integration is handled by an ESB, it is best to log requests and responses on the bus itself. See, for example, Mule's log component.
Sometimes the sheer volume of data exchanged with the external system makes logging everything impossible. On the other hand, during testing and the early release phase it is better to log everything and accept the performance cost. You can do that by adjusting log levels. Consider this trick:
Collection<Integer> requestIds = //...

if (log.isDebugEnabled())
    log.debug("Processing ids: {}", requestIds);
else
    log.info("Processing ids size: {}", requestIds.size());
If this logger is configured at DEBUG level, it prints the full collection of request ids. If it is configured to print INFO, it outputs only the collection's size. Before you ask whether I forgot about isInfoEnabled, reread the 2nd tip. One more note: the collection of ids must not be null here. Although it would print fine at DEBUG even when null, it throws a big fat NullPointerException when configured at INFO. Remember the side effects from the 4th tip?
Log exceptions properly
First: do not log exceptions; let the framework or container do it for you. There is one exception, of course: if you throw an exception from a remote service (RMI, EJB, etc.), make sure it serializes so it can actually reach the client (it is part of the API). Otherwise the client receives a NoClassDefFoundError or some other strange error instead of the real message.
Logging exceptions is one of the most important roles of logging, yet many programmers treat logging as a way of handling exceptions. They often just return a default value (usually null, 0 or an empty string) and pretend nothing happened. Other times they log the exception first, then wrap it and rethrow it:
log.error("IO exception", e);
throw new MyCustomException(e);

This pattern usually prints the stack trace twice, because whatever catches MyCustomException will likely log it again. Log, or wrap and rethrow; never do both, or your logs will be confusing.
But what if we really do want to log the exception? For some reason (not reading the API and the docs, presumably?), about half of the exception logging I see is wrong. A quick quiz: which of the following statements correctly logs a NullPointerException?
try {
    Integer x = null;
    ++x;
} catch (Exception e) {
    log.error(e);                                                      //A
    log.error(e, e);                                                   //B
    log.error("" + e);                                                 //C
    log.error(e.toString());                                           //D
    log.error(e.getMessage());                                         //E
    log.error(null, e);                                                //F
    log.error("", e);                                                  //G
    log.error("{}", e);                                                //H
    log.error("{}", e.getMessage());                                   //I
    log.error("Error reading configuration file: " + e);               //J
    log.error("Error reading configuration file: " + e.getMessage());  //K
    log.error("Error reading configuration file", e);                  //L
}
Surprisingly, only G and L (the latter being preferable) are correct! A and B do not even compile against SLF4J; the others either discard the stack trace or print the wrong information. E, for example, prints nothing at all, since a NullPointerException typically carries no message and the stack trace is not printed either. Remember: the first argument is always a text message describing the error. Do not include the exception's message in it; it will be printed automatically, just before the stack trace. But for that to happen, of course, you must pass the exception as the second argument.
Logs should be readable and easy to parse
There are two audiences interested in your logs: humans (whether you agree or not, programmers count) and computers (usually shell scripts written by system administrators). Logs should suit both. If someone looking over your shoulder at your program's logs sees only incomprehensible noise, then you have failed to follow my advice. Logs should be as easy to read and understand as the code.
On the other hand, if your application produces half a gigabyte of logs per hour, no human and no graphical text editor can ever read them. That is when our old friends grep, sed and awk come into play. If possible, make your logs friendly to both humans and computers: avoid unnecessary number formatting, use layouts that are easy to match, and so on. If that is impossible, print the data in two formats:
log.debug("Request TTL set to: {} ({})", new Date(ttl), ttl);
// Request TTL set to: Wed Apr 28 20:14:12 CEST 2010 (1272478452437)

final String duration = DurationFormatUtils.formatDurationWords(durationMillis, true, true);
log.info("Importing took: {}ms ({})", durationMillis, duration);
Computers will appreciate the "milliseconds since the 1970 epoch" format, while humans are happy to see something like "1 day 10 hours 17 minutes 36 seconds".
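With only the JDK, a rough equivalent of that dual output looks like this. The words() helper is home-made for illustration; commons-lang's DurationFormatUtils.formatDurationWords is more complete (it handles days, pluralization, and so on):

```java
import java.time.Duration;

public class ReadableDurationDemo {
    // Home-made human rendering; DurationFormatUtils does this more thoroughly.
    static String words(long millis) {
        Duration d = Duration.ofMillis(millis);
        return String.format("%d hours %d minutes %d seconds",
                d.toHours(), d.toMinutes() % 60, d.getSeconds() % 60);
    }

    public static void main(String[] args) {
        long durationMillis = 123_456_000L;
        // Machine-friendly millis first, human-friendly words in parentheses.
        System.out.printf("Importing took: %dms (%s)%n",
                durationMillis, words(durationMillis));
        // prints: Importing took: 123456000ms (34 hours 17 minutes 36 seconds)
    }
}
```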
In short, logs can be written as elegantly as poetry, if you are willing to put thought into them.