In real life, logging matters. A bank transfer leaves a record of the transaction; an aircraft carries a black box (flight data recorder) that records everything during the flight. If anything goes wrong, people can use the recorded data to figure out exactly what happened. Logging is just as important for system development, debugging, and operation. Without logs, you can hardly figure out what is going on when a program crashes. For example, logging is essential when you write a server program; take the log files of the ezcomet.com server as an example.
When the service crashes, without logs I could hardly tell what went wrong. Logging is important not only for servers but also for desktop GUI applications. For example, when a client's PC program crashes, you can ask them to send you the log file so you can find out where the problem is. Believe me, on an unfamiliar PC environment you would otherwise never guess what the strange problem is. I once received an error log like this:
2011-08-22 17:52:54,828 - root - ERROR - [Errno 10104] getaddrinfo failed
Traceback (most recent call last):
  File "<string>", line 124, in main
  File "<string>", line A, in __init__
  File "H:\workspace\project\build\pyi.win32\mrdj\outPYZ1.pyz/wx._core", line 7978, in __init__
  File "H:\workspace\project\build\pyi.win32\mrdj\outPYZ1.pyz/wx._core", line 7552, in _BootstrapApp
  File "<string>", line +, in OnInit
  File "H:\workspace\project\build\pyi.win32\mrdj\outPYZ1.pyz/twisted.internet.wxreactor", line 175, in install
  File "H:\workspace\project\build\pyi.win32\mrdj\outPYZ1.pyz/twisted.internet._threadedselect", line 106, in __init__
  File "H:\workspace\project\build\pyi.win32\mrdj\outPYZ1.pyz/twisted.internet.base", line 488, in __init__
  File "H:\workspace\project\build\pyi.win32\mrdj\outPYZ1.pyz/twisted.internet.posixbase", line 266, in installWaker
  File "H:\workspace\project\build\pyi.win32\mrdj\outPYZ1.pyz/twisted.internet.posixbase", line-up, in __init__
  File "H:\workspace\project\build\pyi.win32\mrdj\outPYZ1.pyz/socket", line 224, in meth
gaierror: [Errno 10104] getaddrinfo failed
I finally found out that the client's PC had been infected by a virus that made the gethostname call fail. See, without the log, how could you possibly have known that?
Using print is not a good idea.
Although logging is important, not all developers use it correctly. I have seen developers who log like this: insert print statements while developing, then remove them afterwards. Just like this:
print 'Start reading database'
records = model.read_records()
print '# records', records
print 'Updating record ...'
model.update_records(records)
print 'Done'
This works for a simple script, but for a complex system it is a poor approach. First, there is no way to keep only the truly important messages in a log file: you will see lots of output but be unable to find anything useful in it. You also have no way to control the output except by removing the print statements, and you will very likely forget to remove some of the useless ones. Furthermore, all print output goes to standard output, which seriously interferes with the other data you want to read there. Of course, you could send the messages to stderr instead, but print is still a poor way to keep log records.
Use Python's standard logging module
So, what is the correct way to log? It is actually very simple: use Python's standard logging module. Thanks to the Python community, logging is a standard-library module. It is easy to use and very flexible. You can use the logging system like this:
import logging
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)

logger.info('Start reading database')
# read database here
records = {'john': 55, 'tom': 66}
logger.debug('Records: %s', records)
logger.info('Updating records ...')
# update records here
logger.info('Finish updating records')
When you run it, you can see:
INFO:__main__:Start reading database
INFO:__main__:Updating records ...
INFO:__main__:Finish updating records
You might ask how this is different from using print. It has the following advantages:
You can control the level of messages and filter out messages that are not important.
You can decide where the output goes and how it is formatted.
There are several levels of importance to choose from: DEBUG, INFO, WARNING, ERROR, and CRITICAL. By giving the logger or the handler a different level, you can, for example, write only error messages to a particular log file, or record debug details only while debugging. Let's change the logger level to DEBUG and look at the output:
logging.basicConfig(level=logging.DEBUG)
The output becomes:
INFO:__main__:Start reading database
DEBUG:__main__:Records: {'john': 55, 'tom': 66}
INFO:__main__:Updating records ...
INFO:__main__:Finish updating records
As you can see, after we change the logger level to DEBUG, the debug record appears in the output. You can also choose how these messages are handled. For example, you can use a FileHandler to write the records to a file:
import logging

logger = logging.getLogger(__name__)
logger.setLevel(logging.INFO)

# create a file handler
handler = logging.FileHandler('hello.log')
handler.setLevel(logging.INFO)

# create a logging format
formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
handler.setFormatter(formatter)

# add the handler to the logger
logger.addHandler(handler)

logger.info('Hello baby')
Output log records at the appropriate level
With the flexible logging module, you can send log output to any location and configure it at the appropriate level. So you may ask: what is the appropriate level? I will share some of my experience here.
In most cases you do not want to read too much detail in the log file, so the DEBUG level should only be used while you are debugging. I use DEBUG only for detailed diagnostic information, especially when the data volume is large or the frequency is high, such as the intermediate state of each iteration inside an algorithm.
def complex_algorithm(items):
    for i, item in enumerate(items):
        # do some complex algorithm computation
        logger.debug('%s iteration, item=%s', i, item)
I use the INFO level for routine events, such as request handling or server state changes.
def handle_request(request):
    logger.info('Handling request %s', request)
    # handle request here
    result = 'result'
    logger.info('Return result: %s', result)

def start_service():
    logger.info('Starting service at port %s ...', port)
    service.start()
    logger.info('Service is started')
I use WARNING when something important but not an error happens, for example when a user logs in with a wrong password, or when the connection becomes slow.
def authenticate(user_name, password, ip_address):
    # USER_NAME and PASSWORD stand for the stored credentials
    if user_name != USER_NAME and password != PASSWORD:
        logger.warning('Login attempt to %s from IP %s', user_name, ip_address)
        return False
    # do authentication here
The ERROR level is used when an actual error occurs, such as an exception being thrown, a failed I/O operation, or a connection problem.
def get_user_by_id(user_id):
    user = db.read_user(user_id)
    if user is None:
        logger.error('Cannot find user with user_id=%s', user_id)
        return user
    return user
I seldom use CRITICAL. When something really bad happens, you can use this level to record it: for example memory exhaustion, a full disk, or a nuclear crisis (which I hope never happens :S).
Although setting the name of the logger to __name__ is not mandatory, doing so brings many benefits. In Python, the variable __name__ holds the name of the current module, so calling logging.getLogger(__name__) inside the module "foo.bar.my_module" is equivalent to calling logging.getLogger("foo.bar.my_module"). When you need to configure logging, you can configure the logger "foo" and all modules in the foo package will share that configuration. And when you read a log file, you can tell exactly which module each message came from.
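For example, here is a minimal sketch of that hierarchy (the package name foo is just an assumption for illustration): configuring the parent logger "foo" once affects every module logger underneath it.

import logging

# configure the parent logger "foo" once, e.g. in the application entry point
foo_logger = logging.getLogger('foo')
foo_logger.setLevel(logging.INFO)
foo_logger.addHandler(logging.StreamHandler())

# inside foo/bar/my_module.py, getLogger(__name__) would return "foo.bar.my_module";
# its records propagate up to the "foo" logger configured above
logger = logging.getLogger('foo.bar.my_module')
logger.info('handled by the handler attached to "foo"')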
Catch exceptions and record them with the traceback
Logging when a problem occurs is a good habit, but without a traceback the record is of little use. You should catch exceptions and record them together with the traceback. For example:
try:
    open('/path/to/does/not/exist', 'rb')
except (SystemExit, KeyboardInterrupt):
    raise
except Exception as e:
    logger.error('Failed to open file', exc_info=True)
When the logger method is called with the parameter exc_info=True, the traceback is written to the logger. You can see the following result:
ERROR:__main__:Failed to open file
Traceback (most recent call last):
  File "example.py", line 6, in <module>
    open('/path/to/does/not/exist', 'rb')
IOError: [Errno 2] No such file or directory: '/path/to/does/not/exist'
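As a shorthand, the logging module also provides Logger.exception(), which is equivalent to calling error() with exc_info=True. A minimal sketch of the same example using it:

import logging

logging.basicConfig()
logger = logging.getLogger(__name__)

try:
    open('/path/to/does/not/exist', 'rb')
except (SystemExit, KeyboardInterrupt):
    raise
except Exception:
    # exception() logs at ERROR level and appends the current traceback automatically;
    # it should only be called from inside an exception handler
    logger.exception('Failed to open file')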
Logging in Python with the logging module involves four main classes, which are best explained in the official documentation:
Logger provides an interface that the application can use directly;
Handler sends the log records (created by the logger) to the appropriate destination;
Filter provides a fine-grained mechanism for deciding which log records to output;
Formatter determines the final output format of the log record.
The logging module was introduced in Python 2.3. Below are some commonly used classes and module-level functions.
Module-level functions
logging.getLogger([name]): returns a Logger object; if no name is given, the root logger is returned.
logging.debug(), logging.info(), logging.warning(), logging.error(), logging.critical(): log a message at the corresponding level through the root logger.
logging.basicConfig(): performs basic configuration of the logging system by creating a StreamHandler with a default Formatter and adding it to the root logger.
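A minimal sketch of these module-level functions, with an arbitrary level and messages:

import logging

# basicConfig() attaches a StreamHandler with a default Formatter to the root logger
logging.basicConfig(level=logging.WARNING)

# the convenience functions below all log through the root logger
logging.debug('not shown: DEBUG is below the configured WARNING level')
logging.warning('shown on stderr')
logging.error('also shown')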
Each module obtains a logger before emitting messages. The logger name usually corresponds to the module name; for example, the GUI module of a chat tool can obtain its logger like this:
Log=logging.getlogger ("Chat.gui")
And the core module can do this:
Log=logging.getlogger ("Chat.kernel")
Logger.setLevel(lvl): specifies the lowest log level; messages below lvl are ignored. DEBUG is the lowest built-in level and CRITICAL the highest.
Logger.addFilter(filt), Logger.removeFilter(filt): add or remove the specified filter.
Logger.addHandler(hdlr), Logger.removeHandler(hdlr): add or remove the specified handler.
Logger.debug(), Logger.info(), Logger.warning(), Logger.error(), Logger.critical(): log a message at the corresponding level.
The logger levels are ordered as follows:
NOTSET < DEBUG < INFO < WARNING < ERROR < CRITICAL
If the logger level is set to INFO, messages below the INFO level are not output, while messages at or above the INFO level are output.
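A small sketch of this behaviour, using a throw-away logger named 'demo':

import logging

logger = logging.getLogger('demo')
logger.addHandler(logging.StreamHandler())
logger.setLevel(logging.INFO)

logger.debug('suppressed: DEBUG is below INFO')   # not emitted
logger.info('emitted')                            # INFO >= INFO
logger.error('emitted')                           # ERROR >= INFO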
Handlers
The Handler object is responsible for sending the relevant information to its specified destination. Python's logging system offers many kinds of handlers: some output to the console, some write the information to a file, and others send it over the network. If none of them is enough, you can write your own handler. Multiple handlers can be attached to a logger through the addHandler() method.
Handler.setLevel(lvl): specifies the level of messages to be handled; messages below lvl are ignored.
Handler.setFormatter(fmt): selects a Formatter for this handler.
Handler.addFilter(filt), Handler.removeFilter(filt): add or remove a Filter object.
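The following sketch combines these handler methods: one logger with a console handler that shows everything and a file handler that only records errors (the file name errors.log is just an example):

import logging

logger = logging.getLogger('app')
logger.setLevel(logging.DEBUG)

console = logging.StreamHandler()
console.setLevel(logging.DEBUG)            # the console shows everything from DEBUG up

file_handler = logging.FileHandler('errors.log')
file_handler.setLevel(logging.ERROR)       # the file only receives ERROR and above

formatter = logging.Formatter('%(asctime)s - %(levelname)s - %(message)s')
console.setFormatter(formatter)
file_handler.setFormatter(formatter)

logger.addHandler(console)
logger.addHandler(file_handler)

logger.debug('console only')
logger.error('console and errors.log')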
Formatters
The Formatter object determines the final rules, structure, and content of the log message. The default time format is %Y-%m-%d %H:%M:%S. Commonly used format fields include:
%(name)s: the name of the logger
%(levelno)s: the log level as a number
%(levelname)s: the log level as text
%(pathname)s: the full path of the module that issued the logging call (may be unavailable)
%(filename)s: the file name of that module
%(module)s: the name of that module
%(funcName)s: the name of the function that issued the logging call
%(lineno)d: the line number of the logging call
%(created)f: the time of the call as a UNIX floating-point timestamp
%(relativeCreated)d: the time of the call, in milliseconds since the logging module was loaded
%(asctime)s: the time of the log record as a string, e.g. "2003-07-08 16:49:45,896" (the part after the comma is milliseconds)
%(thread)d: thread ID (may be unavailable)
%(threadName)s: thread name (may be unavailable)
%(process)d: process ID (may be unavailable)
%(message)s: the user-supplied message
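A minimal Formatter sketch, assuming an arbitrary format string and date format:

import logging

formatter = logging.Formatter(
    fmt='%(asctime)s [%(levelname)s] %(name)s: %(message)s',
    datefmt='%Y-%m-%d %H:%M:%S')

handler = logging.StreamHandler()
handler.setFormatter(formatter)

logger = logging.getLogger('chat.gui')
logger.addHandler(handler)
logger.warning('low disk space')
# prints something like: 2016-08-18 17:52:54 [WARNING] chat.gui: low disk space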
Set Filter
Careful readers will have noticed that the name passed to logging.getLogger() takes the dotted form "A.B.C". This naming scheme is what makes filters configurable. Take a look at this piece of code:
Log=logging.getlogger ("Chat.gui.statistic")
console = logging. Streamhandler ()
Console.setlevel (Logging.info)
Formatter = logging. Formatter ('% (asctime) s% (levelname) s% (message) s ')
Console.setformatter (Formatter)
Filter=logging. Filter ("Chat.gui")
Console.addfilter (Filter)
Log.addhandler (console)
Unlike the previous examples, here we add a filter to the handler, so every log message passes through the filter before it is output. A filter named "A.B" only lets through records from the logger named "A.B" or its descendants (for example "A.B.C"); records from loggers such as "A" or "A.C" are rejected. Multiple filters can be added, and a record is dropped as soon as any one filter rejects it. Filters can also be added to loggers, not only to handlers.
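A small sketch of that behaviour: the filter is attached to a handler on the root logger, so every record propagates to it, but only records whose logger name is "chat.gui" or starts with "chat.gui." get through.

import logging

root = logging.getLogger()
handler = logging.StreamHandler()
handler.addFilter(logging.Filter('chat.gui'))
root.addHandler(handler)

logging.getLogger('chat.gui').warning('passes the filter')
logging.getLogger('chat.gui.statistic').warning('passes the filter')
logging.getLogger('chat.kernel').warning('rejected by the filter')
logging.getLogger('chat').warning('rejected by the filter')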
Multiple handlers can be attached to each logger. Next, let's introduce some commonly used handlers:
1) logging.StreamHandler
This handler outputs information to any file-like object, such as sys.stdout or sys.stderr. Its constructor is:
StreamHandler([strm])
where the strm parameter is a file object; the default is sys.stderr.
2) logging.FileHandler
Similar to StreamHandler, but used to write log messages to a file; FileHandler opens the file for you. Its constructor is:
FileHandler(filename[, mode])
filename is the file name and must be specified.
mode is the mode in which the file is opened; see the built-in open() function. The default is 'a', which appends to the end of the file.
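For example (the file name is arbitrary), passing mode='w' truncates the file on every run instead of appending:

import logging

handler = logging.FileHandler('chat.log', mode='w')  # 'w' truncates; the default 'a' appends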
3) logging.handlers.RotatingFileHandler
This handler is similar to FileHandler, but it can manage the file size. When the file reaches a certain size, it automatically renames the current log file and then creates a new one with the original name to continue writing. Suppose the log file is chat.log: when chat.log reaches the specified size, RotatingFileHandler renames it to chat.log.1; if chat.log.1 already exists, it first renames chat.log.1 to chat.log.2, and so on; finally it re-creates chat.log and continues to write log messages into it. Its constructor is:
RotatingFileHandler(filename[, mode[, maxBytes[, backupCount]]])
The filename and mode parameters are the same as for FileHandler.
maxBytes specifies the maximum size of the log file; if maxBytes is 0, the file may grow without limit and the renaming process described above never happens.
backupCount specifies how many backup files to keep. For example, with a value of 2, when the renaming process occurs the old chat.log.2 is not renamed but deleted.
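A minimal sketch with assumed values: roll over once the file reaches about 1 MB and keep two backups (chat.log.1 and chat.log.2):

import logging
import logging.handlers

logger = logging.getLogger('chat')
handler = logging.handlers.RotatingFileHandler('chat.log',
                                               maxBytes=1024 * 1024,
                                               backupCount=2)
logger.addHandler(handler)
logger.warning('this goes to chat.log until it reaches roughly 1 MB')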
4) logging.handlers.TimedRotatingFileHandler
This handler is similar to RotatingFileHandler, but instead of checking the file size it automatically creates a new log file at a fixed time interval. The renaming process is similar to RotatingFileHandler's, except that the rolled-over file is suffixed with the current time instead of a sequence number. Its constructor is:
TimedRotatingFileHandler(filename[, when[, interval[, backupCount]]])
The filename and backupCount parameters have the same meaning as in RotatingFileHandler.
interval is the length of the time interval.
The when parameter is a string that gives the unit of the time interval. It is case-insensitive and takes the following values:
S: seconds
M: minutes
H: hours
D: days
W: weeks (interval==0 means Monday)
midnight: roll over at midnight
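A minimal sketch with assumed values: start a new file every day at midnight and keep the last seven files:

import logging
import logging.handlers

handler = logging.handlers.TimedRotatingFileHandler('chat.log',
                                                    when='midnight',
                                                    interval=1,
                                                    backupCount=7)
logging.getLogger('chat').addHandler(handler)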
5) logging.handlers.SocketHandler
6) logging.handlers.DatagramHandler
These two handlers both send log messages over the network; the difference is that the former uses TCP and the latter UDP. Their constructors are:
Handler(host, port)
where host is the host name and port is the port number.
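A minimal sketch, assuming a log server is listening on localhost (SocketHandler sends pickled LogRecord objects, so a matching receiver such as the socket server from the logging cookbook has to run on the other end):

import logging
import logging.handlers

logger = logging.getLogger('chat')

# TCP: the module provides a default port constant (9020)
tcp_handler = logging.handlers.SocketHandler('localhost',
                                             logging.handlers.DEFAULT_TCP_LOGGING_PORT)
logger.addHandler(tcp_handler)

# UDP: same constructor, but the records are sent as datagrams
udp_handler = logging.handlers.DatagramHandler('localhost',
                                               logging.handlers.DEFAULT_UDP_LOGGING_PORT)
logger.addHandler(udp_handler)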
7) logging.handlers.SysLogHandler
8) logging.handlers.NTEventLogHandler
9) logging.handlers.SMTPHandler
10) logging.handlers.MemoryHandler
11) logging.handlers.HTTPHandler
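As an illustration of one of these, here is a sketch of SMTPHandler, which mails log records; the mail server and addresses are purely hypothetical:

import logging
import logging.handlers

mail_handler = logging.handlers.SMTPHandler(mailhost='smtp.example.com',
                                            fromaddr='server@example.com',
                                            toaddrs=['admin@example.com'],
                                            subject='Application error')
mail_handler.setLevel(logging.ERROR)     # only mail out ERROR and CRITICAL records
logging.getLogger('chat').addHandler(mail_handler)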
# encoding: utf-8
#import logging
#FORMAT = '%(asctime)-15s %(clientip)s %(user)-8s %(message)s'
#logging.basicConfig(format=FORMAT)
#d = {'clientip': '192.168.0.1', 'user': 'fbloggs'}
#logger = logging.getLogger('tcpserver')
#logger.warning('Protocol problem: %s', 'connection reset', extra=d)

#FORMAT = '%(asctime)-15s %(message)s'
#logging.basicConfig(filename="C:\\users\\june\\desktop\\1.txt", level=logging.DEBUG, filemode="a", format=FORMAT)
#logging.debug('This is a message')
#logging.debug('Test')

#import logging
#import datetime
#
#curDate = datetime.date.today() - datetime.timedelta(days=0)
#logName = 'c:\\users\\june\\desktop\\error_%s.log' % curDate
#
#logging.basicConfig(level=logging.INFO,
#                    format='%(asctime)s %(filename)s[line:%(lineno)d] %(levelname)s %(message)s',
#                    #datefmt='%a, %d %b %Y %H:%M:%S',
#                    filename=logName,
#                    filemode='a')
#
##2013-10-21 03:25:51,509 writeLog.py[line:14] INFO This is INFO message
##2013-10-21 03:25:51,510 writeLog.py[line:15] WARNING This is WARNING message
#logging.debug('This is debug message')
#logging.info('This is info message')
#logging.warning('This is warning message')

import logging
import logging.config

logging.config.fileConfig("logging.conf")

# create logger
loggerInfo = logging.getLogger("infoLogger")

# "application" code
loggerInfo.debug("debug message")
loggerInfo.info("info message")
loggerInfo.warn("warn message")
loggerInfo.error("error message")
loggerInfo.critical("critical message")

loggerError = logging.getLogger("errorLogger")
loggerError.error("Error: Hello world!")
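The fileConfig() call above needs a logging.conf file defining the loggers infoLogger and errorLogger. The exact contents are not part of the original code, so the following is only one possible layout:

[loggers]
keys=root,infoLogger,errorLogger

[handlers]
keys=consoleHandler

[formatters]
keys=simpleFormatter

[logger_root]
level=DEBUG
handlers=consoleHandler

[logger_infoLogger]
level=DEBUG
handlers=consoleHandler
qualname=infoLogger
propagate=0

[logger_errorLogger]
level=ERROR
handlers=consoleHandler
qualname=errorLogger
propagate=0

[handler_consoleHandler]
class=StreamHandler
level=DEBUG
formatter=simpleFormatter
args=(sys.stdout,)

[formatter_simpleFormatter]
format=%(asctime)s - %(name)s - %(levelname)s - %(message)s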
#coding=utf-8
import logging
import datetime

format = '%(asctime)s - %(filename)s - [line:%(lineno)d] - %(levelname)s - %(message)s'
curDate = datetime.date.today() - datetime.timedelta(days=0)
infoLogName = r'c:/users/june/desktop/info_%s.log' % curDate
errorLogName = r'c:/users/june/desktop/error_%s.log' % curDate

formatter = logging.Formatter(format)

infoLogger = logging.getLogger("infoLog")
errorLogger = logging.getLogger("errorLog")

infoLogger.setLevel(logging.INFO)
errorLogger.setLevel(logging.ERROR)

infoHandler = logging.FileHandler(infoLogName, 'a')
infoHandler.setLevel(logging.INFO)
infoHandler.setFormatter(formatter)

errorHandler = logging.FileHandler(errorLogName, 'a')
errorHandler.setLevel(logging.ERROR)
errorHandler.setFormatter(formatter)

testHandler = logging.StreamHandler()
testHandler.setFormatter(formatter)
testHandler.setLevel(logging.ERROR)

infoLogger.addHandler(infoHandler)
infoLogger.addHandler(testHandler)
errorLogger.addHandler(errorHandler)

#infoLogger.debug("debug message")
#infoLogger.info("info message")
#infoLogger.warn("warn message")
## this line is printed both to the file and to the terminal
#infoLogger.error("error message")
#
#errorLogger.error("error message")
#errorLogger.critical("critical message")
"' Created on August 18, 2016 @author:apple ' #-*-coding:utf-8-*-#开发出一个日志系统, both to output the log to the console and to write the log file import Loggingimport Timeimport osimport Os.pathclass Logger (): Def __init__ (self, Log_name, logger_name): " Specifies the file path to save the log, the log level, and the calling file to deposit the log into the specified file ' #设置日志文件名称: Time.time () gets the current time; Time.localtime () Get local time; Time.strftime () formatted date; time_str = Time.strftime ("%y_%m_%d_%h_%m_%s", Time.localtime (Time.time ())) Logn Ame = Time_str + ' _ ' + log_name + '. Log ' #设置日志文件所在的路径 Log_filedir = ' log ' if not OS.PATH.ISDI R (Log_filedir): Print ("Log folder%s does not exist, start creating this Folder"%log_filedir) Os.mkdir (' log ') Else: Print ("Log folder%s exists"%log_filedir) Os.chdir (' log ') #创建一个logger以及设置日志级别 #logging有6个日志级别 : NOTSET, DEBUG, INFO, WARNING, ERROR, critical the corresponding values are: 0,10,20,30,40,50 #例如: Logging. Debug and 10 are equivalent representations #可以给日志对象 (Logger Instance) Set the log level below that level of loggingThe message will be ignored, or you can set the log level #对于低于该级别的日志消息 for Hanlder, and handler will ignore it. Self.logger = Logging.getlogger (logger_name) self.logger.setLevel (logging. DEBUG) #创建文件handler to write to the log file and set the file log level File_handler = logging. Filehandler (logname) file_handler.setlevel (logging. DEBUG) #创建控制端输出handler for output to the control side and set the output log level Console_handler = logging. Streamhandler () Console_handler.setlevel (logging. DEBUG) #在控制端handler添加过滤器, the handler message containing chat or GUI output filter = logging. Filter ("Chat.gui") Console_handler.addfilter (filter) #定义handler的输出格式并将格式应用到handler Formatter = Logging. Formatter ('% (asctime) s-% (name) s-% (levelname) s-% (message) s ') File_handler.setformatter (Formatter) Console_han Dler.setformatter (Formatter) #将handler加入到logger Self.logger.addHandler (File_handler) Self.log Ger.addhandler (Console_handler) Self.logger.debug ("This is debug log information") Self.logger.Info ("Welcome everyone to Python's World") #将handler从logger中移除 Self.logger.removeHandler (File_handler) Self.logger.removeHandler (console_handler) if __name__ = = ' __main__ ': Print (OS.GETCWD ()) Log = Logger (' Create_ Log ', "Chat.gui.statistic") # module-level function # # Logging.getlogger ([name]): Returns a Logger object if no name specified will return root logger# logging.debug (), Logging.info (), logging.warning (), Logging.error (), logging.critical (): Set the log level of root logger # logging.basicconfig () : Establish a Streamhandler for the logging system with the default formatter, set the base configuration and add it to root Logger # loggers# # logger.setlevel (lel): Specify the lowest log level, and the level below LEL will be ignored. Debug is the lowest built-in level, critical is the highest # logger.addfilter (filt), Logger.removefilter (filt): Adds or removes the specified filter# logger.addhandler ( HDLR), Logger.removehandler (HDLR): Adds or deletes the specified handler# logger.debug (), Logger.info (), logger.warning (), Logger.error (), Logger.critical (): Can set the log level # handlers# # Handler object is responsible for sending related information to the specified destination. 
Multiple handler# handler.setlevel (LEL) can be added through the AddHandler () method: Specifies the level of information being processed, and information below the LEL level is ignored # handler.setformatter () : Choose a lattice for this handler# Handler.addfilter (filt), Handler.removefilter (filt): Add or remove a Filter object # formatters# # Formatter Object Set the last rule, structure, and content of the log information, The default time format is%y-%m-%d%h:%m:%s, below is some information commonly used by formatter #% (name) s logger name # #% (Levelno) s The log level in number form # #% (levelname) s text form the log level # #% (pathname) s The full pathname of the module that calls the log output function, possibly without the # #% (filename The file name of the module that calls the log output function # #% (module) s call log output function module name # #% (funcName) s call log output letter Number of function name # #% (Lineno) d Call log output function statement in line # #% (created) F current time, represented by UNIX standard floating-point number of Time # # % (relativecreated) d when the log information is output, the current time as a string of # #% (Asctime) s as the number of milliseconds since logger was created. The default format is "2003-07-08 16:49:45,896". The comma is followed by a millisecond # #% (thread) d thread ID. There may not be a # #% (threadname) s thread name. There may not be a # #% (process) d ID. May not have # #% (message) s user output messages
"' Created on August 25, 2016 @author:apple ' ' Import logginglogging.basicconfig (level=logging.info, format= '% ( Asctime) s% (name) s% (levelname) s% (message) s ', datefmt= '%m-%d%h:%m ', filename= './autoupdate.log ', Filemode= ' W ') console = logging. Streamhandler () console.setlevel (logging.info) formatter = logging. Formatter ('% (name) s:% (levelname) -8s% (message) s ') Console.setformatter (Formatter) Logging.getlogger ('). AddHandler (console) logging.info ("Hello world!")