Many articles have mentioned import codecs. Indeed!
But some of them only cover UTF-8 when processing Unicode, or treat UTF-16 as a single simple case.
UTF-16 is an easy place to hit errors, because the utf_16 codec looks for a BOM (byte order mark) by default: if a file was created without one, Python cannot read it normally, and you have to consider utf_16_le and friends instead.
------------------------------------
utf_16, utf_16_le, utf_16_be, utf_8
Reference: http://blog.csdn.net/skeleton703/article/details/8433375 and http://blog.csdn.net/pkrobbie/article/details/1451437
The open function in the Python core library was designed around ASCII, but we increasingly have to deal with Unicode files. Fortunately, Python provides the codecs module to solve this problem, though there are a few things to watch out for when using it.
The open function of the codecs module is defined as follows:
open(filename, mode[, encoding[, errors[, buffering]]]) — open an encoded file using the given mode and return a wrapped version providing transparent encoding/decoding.
The first two parameters, filename and mode, are the same as in the built-in open. The third parameter, encoding, is the key one: it specifies the file's encoding.
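As a minimal sketch of the round trip (the file name below is just an illustrative placeholder), writing and reading a UTF-16 file through codecs.open looks like this:

```python
import codecs

# Write a unicode string out as UTF-16; codecs.open encodes it
# transparently and the utf_16 codec prepends a BOM for us.
with codecs.open("demo_utf16.txt", "w", encoding="utf_16") as f:
    f.write(u"hello \u4e2d\u6587")

# Reading it back decodes transparently to a unicode string.
with codecs.open("demo_utf16.txt", "r", encoding="utf_16") as f:
    text = f.read()

print(text)
```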
The commonly used Unicode codecs are utf_16, utf_16_le, utf_16_be, and utf_8. Each has several registered aliases, which you can find in the Python manual.
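Rather than memorizing the alias table, you can ask codecs.lookup, which normalizes any registered alias to the codec's canonical name:

```python
import codecs

# Each registered alias resolves to the same canonical codec name.
print(codecs.lookup("utf_16_le").name)  # utf-16-le
print(codecs.lookup("U16").name)        # utf-16
print(codecs.lookup("utf8").name)       # utf-8
```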
The utf_16, utf_16_le, and utf_16_be codecs differ as follows.
If utf_16 is specified, Python checks the file's BOM (byte order mark) to determine whether it is utf_16_le or utf_16_be; for files without a BOM, an error is raised.
If utf_16_le or utf_16_be is specified directly, Python does not check for a BOM, which is useful for files without one. For files that do have a BOM, however, note that the BOM will be read back as the first character (U+FEFF).
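Both behaviours can be demonstrated directly (the file names here are illustrative):

```python
import codecs

# A UTF-16-LE file written without a BOM.
with open("no_bom.txt", "wb") as f:
    f.write(u"abc".encode("utf_16_le"))

# utf_16 refuses the file because there is no BOM to inspect.
try:
    with codecs.open("no_bom.txt", encoding="utf_16") as f:
        f.read()
    bom_error = None
except UnicodeError as exc:
    bom_error = exc

# utf_16_le reads the same file without complaint.
with codecs.open("no_bom.txt", encoding="utf_16_le") as f:
    decoded = f.read()

# A file WITH a BOM, read as utf_16_le: the BOM (U+FEFF) shows up
# as the first character of the decoded result.
with open("with_bom.txt", "wb") as f:
    f.write(codecs.BOM_UTF16_LE + u"abc".encode("utf_16_le"))

with codecs.open("with_bom.txt", encoding="utf_16_le") as f:
    decoded_with_bom = f.read()

print(repr(bom_error))
print(repr(decoded))           # 'abc'
print(repr(decoded_with_bom))  # '\ufeffabc'
```

If you control the reader, the simplest fix is to strip a leading u"\ufeff" after decoding with utf_16_le or utf_16_be.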