This article introduces the flexibility and configurability of logging in Python.

One of the worst situations for a developer is having to figure out why an unfamiliar application does not work. Sometimes you don't even know whether the system is running at all.

Applications running in production are black boxes that need to be traced and monitored. The simplest and most important way to do this is to write logs. Logging lets us emit information while the system runs, information that is useful to us and to our system administrators.

Just as we write documentation for future programmers, we should make new software generate enough logs for system developers and administrators. Logs are a key part of the record of an application's runtime behavior. When you add a line of logging to your software, you are writing documentation for the developers and administrators who will maintain the system in the future.

Some purists argue that, with good logging and tests, a trained developer rarely needs an interactive debugger. If we cannot explain our application with detailed logs during development, it will be even harder to explain the code when it is running in production.

This article introduces Python's logging module, covering its design and ways to adapt it to more complex use cases. It is not intended as documentation for developers; it is more of a guide that illustrates how Python's logging module is built, for people who want to study it in depth.

Why use the logging module?

Some developers may ask: why not a simple print statement? The logging module has many advantages, including:

1. Multithreading support

2. Categorization of logs at different levels

3. Flexibility and configurability

4. Separation of what is logged from how it is logged

This last point, truly separating the content we log from the way it is recorded, ensures that different parts of the software can collaborate. For example, it allows the developer of a framework or library to add logs, while the system administrator or whoever is in charge of the runtime configuration decides later what should be recorded.

What is in the Logging module?

The logging module cleanly separates the responsibilities of each of its parts (following the approach of the Apache Log4j API). Let's look at how a log line travels through the module's code and study its different parts.

Logger

The logger is the object that developers interact with most often. Its main API describes what we want to record.

For example, given a logger we can categorize a message and ask for it to be sent, without worrying about how it is sent.

For example, when we write logger.info("Stock was sold at %s", price), we have a model like this in mind:
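In code, that minimal model looks like the following sketch (my own example, not from the original article; the "sales" channel name is made up):

import logging

# The default basicConfig format is "%(levelname)s:%(name)s:%(message)s"
logging.basicConfig(level=logging.INFO)

logger = logging.getLogger("sales")  # hypothetical channel name
price = 42
logger.info("Stock was sold at %s", price)
# -> INFO:sales:Stock was sold at 42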

We want a log line, and we assume some code inside the logger makes that line appear in the console or in a file. But what happens internally?

Log records

Log records are the packages the logging module uses to carry all the required information. They contain data such as where the log was requested, the template string, the parameters, and information about the call.

These record objects are created every time we invoke the logger. But how do they get serialized into a stream? Through handlers!
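To make the record concrete, here is a sketch (mine, not the article's) that builds a LogRecord by hand purely for illustration; in normal use the logger creates records for you:

import logging

record = logging.LogRecord(
    name="parent.child",
    level=logging.INFO,
    pathname="sample.py",        # where the log was requested
    lineno=10,
    msg="Stock was sold at %s",  # the template string
    args=(42,),                  # the parameters
    exc_info=None,
)

# getMessage() interpolates the template with the parameters
print(record.name, record.levelname, record.getMessage())
# -> parent.child INFO Stock was sold at 42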

Handler

Handlers send log records to specific destinations. They receive log records and process them with their relevant functions.

For example, a file handler takes a log record and appends it to a file.

The standard logging module ships with a variety of built-in handlers, such as (a short usage sketch follows the list):

1. Several file handlers (TimedRotatingFileHandler, RotatingFileHandler, and WatchedFileHandler) that write to files

2. StreamHandler, which writes to a target stream such as stdout or stderr

3. SMTPHandler, which sends log records by email

4. SocketHandler, which sends log records to a streaming socket

5. SyslogHandler, NTEventLogHandler, HTTPHandler, MemoryHandler, and others
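As a small usage sketch (my own; the file name and sizes are arbitrary), attaching one of these built-in handlers looks like this:

import logging
import logging.handlers

# Keep the current file plus up to three rotated backups of ~1 MB each
handler = logging.handlers.RotatingFileHandler(
    "app.log", maxBytes=1_000_000, backupCount=3,
)

logger = logging.getLogger("rotating.demo")  # hypothetical channel name
logger.addHandler(handler)
logger.error("this line ends up in app.log")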

At this point, our model of the module is already close to the real thing: loggers create log records, and handlers deliver them to their destinations.

Most handlers work with strings (SMTPHandler and FileHandler, for example). You might wonder how these structured log records get converted into easy-to-serialize bytes.

Formatter

Formatters are responsible for converting metadata-rich log records into strings. A default formatter is used if none is provided.

The generic Formatter class provided by the logging library takes a template string and a style as input. Placeholders in the template can refer to any attribute of a LogRecord object.

For example, the template '%(asctime)s %(levelname)s %(name)s: %(message)s' will produce a log line like 15:31:13,942 INFO parent.child: Hello EuroPython.

Note: the message attribute is the result of interpolating the original log template with the provided parameters. (For example, for logger.info("Hello %s", "Laszlo"), the message will be "Hello Laszlo".)

All default attributes can be found in the logging documentation.
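Here is a sketch (my own) wiring the formatter template shown above onto a handler:

import logging

handler = logging.StreamHandler()
handler.setFormatter(
    logging.Formatter("%(asctime)s %(levelname)s %(name)s: %(message)s")
)

logger = logging.getLogger("parent.child")
logger.addHandler(handler)
logger.error("Hello EuroPython")
# -> e.g. 2019-07-10 15:31:13,942 ERROR parent.child: Hello EuroPython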

Now that we understand formatters, our model has grown again: handlers use a formatter to turn the log record into a string before emitting it.

Filter

The last objects in our logging toolkit are filters.

Filters allow finer-grained control over which logs are emitted. Multiple filters can be attached to both loggers and handlers. For a record to be emitted, all filters must pass it.

Users can declare their own filters as objects with a filter method that takes a log record as input and returns True/False as output.
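For example, a filter that drops any record mentioning a password might look like this (a sketch of my own, not from the article):

import logging

class NoPasswordsFilter(logging.Filter):
    def filter(self, record):
        # Returning False drops the record; True lets it through
        return "password" not in record.getMessage()

handler = logging.StreamHandler()
handler.addFilter(NoPasswordsFilter())

logger = logging.getLogger("filters.demo")  # hypothetical channel name
logger.addHandler(logger_handler := handler)
logger.error("connected")                   # emitted
logger.error("password is hunter2, shhh")   # dropped by the filter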

With filters in place, the workflow of a log is: the logger creates a record, the record must pass the logger's filters, and each handler whose filters accept it emits the record.

Logger hierarchy

At this point you may think the module already hides a lot of cleverly designed complexity, but there is more to consider: the logger hierarchy.

We create loggers with logging.getLogger(<name>). The string passed to getLogger defines the hierarchy using dot-separated names.

For example, logging.getLogger("parent.child") creates a logger named "child" whose parent logger is named "parent". Loggers are global objects managed by the logging module, so we can conveniently retrieve them anywhere in a project.

Logger instances are also known as channels. The hierarchy allows developers to define the channels and their granularity.

When a record is given to a logger, it is passed to all of that logger's handlers, and then to the parent's handlers, recursively, until we reach the top-level (root) logger, whose name is the empty string, or a logger that sets propagate = False.

Note that parent loggers themselves are not called; only their handlers are. This means that filters and other code attached to parent loggers will not be executed. This is a common trap when adding filters to loggers.
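A short sketch (my own, not from the article) makes the propagation rules visible:

import logging

# One handler on the root logger, none on "parent" or "parent.child"
logging.basicConfig(level=logging.DEBUG,
                    format="root handler saw: %(name)s %(message)s")

parent = logging.getLogger("parent")
child = logging.getLogger("parent.child")

child.info("hello")        # bubbles up to the root handler and is printed

parent.propagate = False   # stop propagation at "parent"
child.info("hello again")  # reaches "parent", which has no handlers: nothing is printed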

Workflow Summary

We have now covered the division of responsibilities and how log filtering is fine-tuned. However, there are two other attributes we have not mentioned yet:

1. Loggers can be disabled, so that no records at all are emitted from them.

2. An effective level can be set on both loggers and handlers.

For example, if a logger is set to the INFO level, only records at INFO level and above are passed on. The same rule applies to handlers.
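A minimal sketch (my own example) showing that the levels on the logger and on the handler act as two separate gates:

import logging

logger = logging.getLogger("levels.demo")  # hypothetical channel name
logger.setLevel(logging.INFO)              # the logger drops anything below INFO
logger.propagate = False                   # keep the demo self-contained

handler = logging.StreamHandler()
handler.setLevel(logging.WARNING)          # the handler drops anything below WARNING
logger.addHandler(handler)

logger.debug("dropped by the logger")
logger.info("passed by the logger, dropped by the handler")
logger.warning("emitted")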

Taking all of the above into account, the final path of a log record is: the logger checks that it is not disabled and that its effective level allows the record, runs its filters, then hands the record to its own handlers and to the handlers of its ancestors (unless propagation is stopped); each handler in turn checks its own level and filters before emitting.

How to Use the logging Module

Now that we have seen the parts and design of the logging module, it is time to understand how a developer interacts with it. Here is a code example:

import logging

def sample_function(secret_parameter):
    logger = logging.getLogger(__name__)  # __name__ = projectA.moduleB
    logger.debug("Going to perform magic with '%s'", secret_parameter)
    ...
    try:
        result = do_magic(secret_parameter)
    except IndexError:
        logger.exception("OMG it happened again, someone please tell Laszlo")
    except:
        logger.info("Unexpected exception", exc_info=True)
        raise
    else:
        logger.info("Magic with '%s' resulted in '%s'",
                    secret_parameter, result, stack_info=True)

It creates a logger using the module's __name__. Because Python module names are joined with dots, this builds channels and hierarchy levels that mirror the project structure.

The logger variable references the "moduleB" logger, which has "projectA" as its parent, whose parent in turn is the root logger.

On line five we can see the call that emits a log. We can use one of the debug, info, warning, error, or critical methods to record logs at the appropriate level.

When logging a message, in addition to the template parameters we can pass keyword parameters with special meanings. The most interesting ones are exc_info and stack_info, which add information about the current exception and the stack frame, respectively. For convenience, logger objects offer an exception method, which is the same as calling error with exc_info=True.

These are the basics of using the logging module, but it is also worth noting some practices that are generally considered bad.

Eagerly formatted strings

Whenever possible, avoid logger.info("string template {}".format(argument)) and prefer logger.info("string template %s", argument). The latter is better practice because the string is only interpolated if the log is actually emitted. With .format, the interpolation happens even when the effective level is above INFO, which wastes cycles. The sketch below makes the difference visible.
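This sketch (my own, not from the article) demonstrates the cost with a deliberately noisy __str__:

import logging

class Expensive:
    def __str__(self):
        print("expensive __str__ ran!")
        return "value"

logger = logging.getLogger("lazy.demo")  # hypothetical channel name
logger.setLevel(logging.WARNING)         # INFO messages will be dropped

logger.info("eager: {}".format(Expensive()))  # __str__ runs even though the log is dropped
logger.info("lazy: %s", Expensive())          # dropped before interpolation; __str__ never runs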

Capturing and formatting exceptions

We often want to log information about exceptions in a module. It might seem intuitive to do this:

try:
    ...
except Exception as error:
    logger.info("Something bad happened: %s", error)

But this code shows us a log line like Something bad happened: 'secret_key', which is not very useful. If we use exc_info instead:

try:
    ...
except Exception:
    logger.info("Something bad happened", exc_info=True)

The output now includes the traceback:

Something bad happened
Traceback (most recent call last):
  File "sample_project.py", line 10, in code
    inner_code()
  File "sample_project.py", line 6, in inner_code
    x = data["secret_key"]
KeyError: 'secret_key'

This not only contains the exact source of the exception, but also its type.

Configuring logging

Instrumenting our software is only half the job. We also need to configure the logging stack and decide how those records are emitted.

The following methods can be used to configure the logging stack:

Basic configuration

This is by far the simplest way to configure logging. Calling logging.basicConfig(level="INFO") creates a basic StreamHandler that logs everything at the INFO level and above to the console. Some of the parameters basicConfig accepts are shown in the sketch below:
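A sketch of common basicConfig parameters (the values here are illustrative, not from the original article):

import logging

logging.basicConfig(
    level=logging.INFO,  # minimum level that will be emitted
    format="%(asctime)s %(levelname)s %(name)s: %(message)s",
    datefmt="%H:%M:%S",  # how %(asctime)s is rendered
    filename="app.log",  # write to a file instead of the console
    filemode="a",        # append to the file rather than truncating it
)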

Note that basicConfig is only effective at the beginning of a program's run: if the root logger already has handlers configured, calling basicConfig does nothing.

Dictionary configuration

The configuration of all the elements, and how they are connected, can be described as a dictionary. The dictionary has sections for the loggers, handlers, and formatters, plus some basic global parameters.

Example:

config = {
    'disable_existing_loggers': False,
    'version': 1,
    'formatters': {
        'short': {
            'format': '%(asctime)s %(levelname)s %(name)s: %(message)s'
        },
    },
    'handlers': {
        'console': {
            'level': 'INFO',
            'formatter': 'short',
            'class': 'logging.StreamHandler',
        },
    },
    'loggers': {
        '': {
            'handlers': ['console'],
            'level': 'ERROR',
        },
        'plugins': {
            'handlers': ['console'],
            'level': 'INFO',
            'propagate': False
        }
    },
}

import logging.config
logging.config.dictConfig(config)

When invoked, dictConfig disables all existing loggers unless disable_existing_loggers is set to False. That is almost always what you want, because many modules declare a module-level logger that is instantiated at import time, before dictConfig is called.

The full schema accepted by dictConfig can be found in the official documentation. These settings are usually stored in a YAML file and loaded from there. Many developers prefer this method over fileConfig because it offers better support for customization.
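In that spirit, here is a minimal sketch of loading the configuration from YAML (my own example; it assumes PyYAML is installed and that a hypothetical logging.yaml file contains the same structure as the config dictionary above):

import logging.config

import yaml  # third-party: PyYAML

with open("logging.yaml") as f:
    config = yaml.safe_load(f)

logging.config.dictConfig(config)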

Extending logging

Thanks to its design, the logging module is easy to extend. Let's look at some examples:

Logging JSON

We can log JSON by creating a custom formatter that converts the log record into a JSON-encoded string:

import logging
import logging.config
import json

ATTR_TO_JSON = ['created', 'filename', 'funcName', 'levelname', 'lineno',
                'module', 'msecs', 'msg', 'name', 'pathname', 'process',
                'processName', 'relativeCreated', 'thread', 'threadName']

class JsonFormatter:
    def format(self, record):
        obj = {attr: getattr(record, attr)
               for attr in ATTR_TO_JSON}
        return json.dumps(obj, indent=4)

handler = logging.StreamHandler()
handler.formatter = JsonFormatter()
logger = logging.getLogger(__name__)
logger.addHandler(handler)
logger.error("Hello")

Add more context

In the formatter we can reference any attribute of the log record.

We can add attributes in multiple ways. In this example, we use filters to enrich log records.

import logging
import logging.config

GLOBAL_STUFF = 1

class ContextFilter(logging.Filter):
    def filter(self, record):
        global GLOBAL_STUFF
        GLOBAL_STUFF += 1
        record.global_data = GLOBAL_STUFF
        return True

handler = logging.StreamHandler()
handler.formatter = logging.Formatter("%(global_data)s %(message)s")
handler.addFilter(ContextFilter())
logger = logging.getLogger(__name__)
logger.addHandler(handler)
logger.error("Hi1")
logger.error("Hi2")

This effectively adds an attribute to every log record that passes through the handler, and the formatter includes it in the log line.

Note that this affects every log record in your application, including logs from libraries and other frameworks you use. It can be used to record something like a unique request ID in every log line to track requests, or to add extra contextual information.

Starting with Python 3.2, you can use setLogRecordFactory to intercept the creation of all log records and add extra information. The extra keyword argument and the LoggerAdapter class may be equally interesting.
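Here is a minimal sketch of setLogRecordFactory (my own example; the app_version attribute is hypothetical):

import logging

old_factory = logging.getLogRecordFactory()

def record_factory(*args, **kwargs):
    # Create the record as usual, then attach extra context
    record = old_factory(*args, **kwargs)
    record.app_version = "1.0.0"  # hypothetical extra attribute
    return record

logging.setLogRecordFactory(record_factory)

logging.basicConfig(format="%(app_version)s %(message)s", level="INFO")
logging.info("every record now carries app_version")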

Buffering logs

Sometimes we want logs at hand when troubleshooting an error. It is feasible to create a buffering handler that records the most recent messages and emits them when an error occurs. The following code is a hand-rolled (not built-in) example:

import logging
import logging.handlers

class SmartBufferHandler(logging.handlers.MemoryHandler):
    def __init__(self, num_buffered, *args, **kwargs):
        kwargs["capacity"] = num_buffered + 2  # +2: one for current, one for prepop
        super().__init__(*args, **kwargs)

    def emit(self, record):
        if len(self.buffer) == self.capacity - 1:
            self.buffer.pop(0)
        super().emit(record)

handler = SmartBufferHandler(num_buffered=2, target=logging.StreamHandler(),
                             flushLevel=logging.ERROR)
logger = logging.getLogger(__name__)
logger.setLevel("DEBUG")
logger.addHandler(handler)

logger.error("Hello1")
logger.debug("Hello2")  # This line won't be logged
logger.debug("Hello3")
logger.debug("Hello4")
logger.error("Hello5")  # The error flushes the buffer, so the last two debugs are logged

Summary

This article has described the flexibility and configurability of logging in Python. I hope you find it helpful.
