Fixing garbled Chinese characters when scraping web pages with Python
Over the last few days I was scraping web pages, and while most pages came back fine, a handful produced garbled text (mojibake). After several days of debugging, it turned out the cause was a few invalid byte sequences in the content.
1. Under normal circumstances, you can use

import chardet
charset = chardet.detect(strs)["encoding"]

to detect the encoding of a file or page, or simply grab the charset=xxxx declaration from the page source itself.
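Both approaches can be sketched as follows. The `sniff_charset` helper and the sample HTML are illustrative, not from the original post; `chardet.detect` (mentioned in the comment) is the third-party package named above:

```python
import re

def sniff_charset(raw: bytes, default: str = "utf-8") -> str:
    """Guess a page's encoding from its charset=... declaration.

    `raw` is the undecoded response body. Falls back to `default`
    when no declaration is found. The statistical alternative is
    chardet.detect(raw)["encoding"].
    """
    # Look for charset="gb2312", charset=utf-8, etc. in the first 2 KB
    m = re.search(rb'charset=["\']?([\w-]+)', raw[:2048])
    if m:
        return m.group(1).decode("ascii").lower()
    return default

html = b'<html><head><meta charset="gb2312"></head></html>'
print(sniff_charset(html))  # gb2312
```

Reading the declared charset is cheap but trusts the page author; chardet guesses from byte statistics and works even when the declaration is missing or wrong.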
2. If the content contains invalid byte sequences, decoding it will produce garbage even when the encoding is specified correctly; in other words, the garbling comes from the invalid bytes themselves. You can tell the codec to skip them:

strs = strs.decode("utf-8", "ignore").encode("utf-8")

The second argument of decode() controls how invalid bytes are handled; the default ("strict") raises a UnicodeDecodeError.
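A minimal demonstration of the difference (the truncated byte pair below is a fabricated example of a corrupted page, not data from the original post):

```python
# A UTF-8 stream with a truncated multi-byte sequence in the middle:
# b"\xe4\xb8" is the first two bytes of a three-byte character.
raw = "前半".encode("utf-8") + b"\xe4\xb8" + "后半".encode("utf-8")

# The default "strict" handler raises on the bad bytes
try:
    raw.decode("utf-8")
except UnicodeDecodeError as e:
    print("strict mode raised:", e.reason)

# "ignore" silently drops the invalid bytes
clean = raw.decode("utf-8", "ignore")
print(clean)  # 前半后半

# "replace" keeps a visible U+FFFD marker instead of hiding the damage
marked = raw.decode("utf-8", "replace")
print(marked)
```

"ignore" is convenient for scraping, but "replace" can be the better choice while debugging, since it shows where the corruption occurred.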
That covers the fixes for garbled Chinese characters when scraping with Python. I hope it helps.