Python logging Best Practices
Background
I often joke with my colleagues that I am one of the few people in the company who really understands how logging works, so I wrote this article to share some of that knowledge.
The official Python documentation is the clearest reference on logging I have read so far.
The logging flow diagram in it is especially important; I recommend studying it closely.
1. Applicable scenarios
1.1 General scenarios
In general, two handler types cover most needs (a minimal usage sketch follows):
RotatingFileHandler
Splits the log file once it reaches a specified size.
TimedRotatingFileHandler
Splits the log file at a specified time interval.
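As a minimal sketch of size-based rotation (the file name, size limit, and backup count below are placeholder values, not taken from this article):

```python
import logging
from logging.handlers import RotatingFileHandler

# Rotate app.log once it reaches ~10 MB, keeping 5 old files.
handler = RotatingFileHandler('app.log', maxBytes=10 * 1024 * 1024, backupCount=5)
handler.setFormatter(logging.Formatter('[%(asctime)s] %(levelname)s %(message)s'))

logger = logging.getLogger('size_rotated')
logger.setLevel(logging.INFO)
logger.addHandler(handler)
logger.info('this line goes to app.log')
```

TimedRotatingFileHandler is configured the same way, using `when` and `interval` arguments instead of `maxBytes`; the logger_helper.py example later in this article shows it in full.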
The two handlers above are thread-safe and can be used in multi-threaded programs, but they are not safe to share across multiple processes, so multi-process scenarios need a different approach.
1.2 Multi-process scenarios
For multi-process scenarios, the official Python documentation recommends the following handlers (a minimal client-side sketch follows the list):
SocketHandler
Transmits logs over TCP (refer to my earlier article on building a Python log collection server).
DatagramHandler
Transmits logs over UDP.
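A minimal sketch of the client side, assuming a collector is already listening on localhost at the standard logging ports (the host and logger name here are placeholders):

```python
import logging
from logging.handlers import (
    SocketHandler,
    DatagramHandler,
    DEFAULT_TCP_LOGGING_PORT,
    DEFAULT_UDP_LOGGING_PORT,
)

logger = logging.getLogger('worker')
logger.setLevel(logging.INFO)

# TCP: reliable, ordered delivery of pickled LogRecords.
logger.addHandler(SocketHandler('localhost', DEFAULT_TCP_LOGGING_PORT))
# UDP alternative: faster, but records can be silently dropped.
# logger.addHandler(DatagramHandler('localhost', DEFAULT_UDP_LOGGING_PORT))

logger.info('sent to the log collection server')
```

The collector is a separate process that unpickles each record and writes it to a single file, so no two worker processes ever write the file concurrently; the socket server example from the official logging cookbook is a good starting point.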
Measured on a single machine, i.e. with the log-sending client and the log collection server on the same host, the throughput was roughly:
| Type | Records/second |
| --- | --- |
| TCP | 6000 |
| UDP | 9000 |
You can also consider the following two methods:
1. python-logstash
Open a UDP or TCP input port on Logstash and send log records to it directly; Logstash can then forward the data to Elasticsearch or write it straight to a file.
2. Write a custom handler that pushes log records onto a Redis queue, then run a separate process that reads records out of Redis and appends them to the file (a sketch of such a handler follows).
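A minimal sketch of the custom handler from option 2, assuming the third-party redis package and a local Redis server; the class name, queue name, and connection details are illustrative, not part of this article:

```python
import logging

import redis  # third-party package: pip install redis


class RedisQueueHandler(logging.Handler):
    """Push formatted log records onto a Redis list."""

    def __init__(self, queue_name='log_queue', host='localhost', port=6379):
        super().__init__()
        self.queue_name = queue_name
        self.client = redis.Redis(host=host, port=port)

    def emit(self, record):
        try:
            # RPUSH preserves arrival order; a separate consumer process
            # pops entries (e.g. with BLPOP) and appends them to the log file.
            self.client.rpush(self.queue_name, self.format(record))
        except Exception:
            self.handleError(record)


logger = logging.getLogger('multiprocess_app')
logger.setLevel(logging.INFO)
logger.addHandler(RedisQueueHandler())
logger.info('stored in Redis, drained to the file by another process')
```

Because each worker process only issues atomic RPUSH commands, the single consumer process remains the only writer of the log file.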
2. Log Level
| Level | Numeric value |
| --- | --- |
| CRITICAL | 50 |
| ERROR | 40 |
| WARNING | 30 |
| INFO | 20 |
| DEBUG | 10 |
| NOTSET | 0 |
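These numeric values are what the filtering is based on: a record is handled only if its level is greater than or equal to the logger's level, and each handler then applies its own threshold on top. A small illustrative sketch (the logger name is arbitrary):

```python
import logging
import sys

logger = logging.getLogger('level_demo')
logger.setLevel(logging.INFO)           # the logger drops anything below 20

console = logging.StreamHandler(sys.stdout)
console.setLevel(logging.WARNING)       # this handler keeps only 30 and above
logger.addHandler(console)

logger.debug('dropped by the logger (10 < 20)')
logger.info('passes the logger, dropped by the handler (20 < 30)')
logger.warning('printed (30 >= 30)')
```

This per-handler threshold is exactly what lets one logger feed an all.log handler at INFO and an error.log handler at ERROR, as in the setup below.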
Generally, it is enough to split the output into two log files.
1. all.log
Stores logs at INFO level and above (in a test environment this can be lowered to DEBUG).
Used to trace business flow.
2. error.log
Stores logs at ERROR level and above.
Used for quick troubleshooting.
A reusable log initialization module can be written like this:
logger_helper.py
```python
import logging
from logging import Logger
from logging.handlers import TimedRotatingFileHandler


def init_logger(logger_name):
    # Only attach handlers the first time this logger name is requested,
    # otherwise repeated calls would add duplicate handlers.
    if logger_name not in Logger.manager.loggerDict:
        logger = logging.getLogger(logger_name)
        logger.setLevel(logging.DEBUG)

        datefmt = '%Y-%m-%d %H:%M:%S'
        format_str = '[%(asctime)s]: %(name)s %(levelname)s %(lineno)s %(message)s'
        formatter = logging.Formatter(format_str, datefmt)

        # all.log: INFO and above, rotated at midnight, 7 backups kept
        handler = TimedRotatingFileHandler('./all.log', when='midnight', backupCount=7)
        handler.setFormatter(formatter)
        handler.setLevel(logging.INFO)
        logger.addHandler(handler)

        # error.log: ERROR and above, rotated at midnight, 7 backups kept
        handler = TimedRotatingFileHandler('./error.log', when='midnight', backupCount=7)
        handler.setFormatter(formatter)
        handler.setLevel(logging.ERROR)
        logger.addHandler(handler)

    logger = logging.getLogger(logger_name)
    return logger


logger = init_logger('dataservice')

if __name__ == '__main__':
    logger = init_logger('dataservice')
    logger.error('test-error')
    logger.info('test-info')
    logger.warning('test-warn')
```
Other modules then only need to import the logger from this module.
model.py

```python
from logger_helper import logger


def business_code():
    # ... business logic ...
    logger.info(...)


# call the business code
business_code()
```