My readers know that I like to rant about Python 3's Unicode handling, and this time is no exception. I'm going to tell you how painful it is to use and why I can't shut up about it. It took me two weeks of fighting Python 3, and I need to vent my disappointment. Amid the ranting there is still useful information, because it shows how to deal with Python 3. If my complaining doesn't bother you, read on.
This rant is different in subject: it has nothing to do with WSGI or HTTP and their associated objects. Normally I'm told I should stop complaining about the Python 3 Unicode system because I don't write the kind of code ordinary people write (HTTP libraries and the like), so this time I'm going to write something else: a command-line application. And to make writing it easier, I wrote a very convenient library called Click.
Note that what I'm doing is what every novice Python programmer does: write a command-line application, a hello-world program. But unlike before, I wanted to make sure the application was stable, supported Unicode on both Python 2 and Python 3, and could be unit tested. So what follows is how to implement it.
What do we want to do?
As developers we want to use Unicode properly in Python 3. Obviously this means that all text data is Unicode and all non-text data is bytes. In such a wonderful world everything is black and white, and the hello-world example is straightforward. So let's write a small shell tool: a clone of Unix cat.
Here is the application implemented for Python 2:
```python
import sys
import shutil

for filename in sys.argv[1:]:
    f = sys.stdin
    if filename != '-':
        try:
            f = open(filename, 'rb')
        except IOError as err:
            print >> sys.stderr, 'cat.py: %s: %s' % (filename, err)
            continue
    with f:
        shutil.copyfileobj(f, sys.stdout)
```
Obviously the command is not particularly good at handling command-line options, but at least it works. So let's start coding.
Unicode in Unix
The code above works in Python 2 precisely because you're dealing with bytes in the dark: the command-line arguments are bytes, the filenames are bytes, and the file contents are bytes. Language purists will point out that this is wrong and causes problems, but if you start thinking about it more, you'll realize it's an unfixable problem.
UNIX is bytes: it has been defined that way, and it always will be. To understand why, you need to look at the different scenarios in which data is transmitted.
These are not the only places data passes through, by the way, but in how many of these scenarios do we actually know an encoding? The answer: in none of them. The best we have is the locale, which carries the terminal's encoding information. The locale can be used to apply display transformations and to guess the encoding of textual messages.
For example, LC_CTYPE=en_US.utf-8 tells the application that the system is running in US English and that most text data is UTF-8 encoded. There are many other locale variables, but let's assume this is the only one we need to look at. Note that LC_CTYPE does not mean all data is UTF-8 encoded. Instead, it tells the application how to classify text characters and when to apply transformations.
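As a quick illustration (my own sketch, not part of the original tool), Python exposes the encoding it derives from these locale variables:

```python
import locale

# Ask Python which encoding it derives from the environment
# (LC_ALL / LC_CTYPE and friends).  Under LC_CTYPE=en_US.utf-8 this
# typically reports 'UTF-8'; under the C locale most Linux systems
# report ASCII (spelled 'ANSI_X3.4-1968').
enc = locale.getpreferredencoding(False)
print(enc)
```

The exact string depends on the machine's environment, which is precisely the point: the locale is a hint, not a guarantee about the data.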
This is important because of the C locale. The C locale is the only locale POSIX actually specifies, and it says: the encoding is ASCII, and all responses from command-line tools are formatted as the POSIX spec defines them.
In our cat tool above, there is no way to treat the data as anything but bytes. The reason is that the shell does not tell you what the data is. For instance, if you invoke cat hello.txt, the terminal encodes hello.txt in its own encoding when passing it to your application.
But now consider this example: cat *. The shell passes all the filenames of the current directory to your application. So what encoding are they in? Filenames are not encoded at all!
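You can see this directly in Python 3, which still exposes the raw bytes view of POSIX filenames (this is my own demonstration, with a made-up filename, not from the original post):

```python
import os
import tempfile

# On POSIX a filename is just a byte string.  Python mirrors this:
# pass a bytes path in, get bytes back, no decoding involved.
d = tempfile.mkdtemp()

# Create a file whose name is latin1-encoded (0xE9 = 'é'),
# i.e. not valid UTF-8:
with open(os.path.join(d.encode(), b'caf\xe9'), 'wb') as f:
    f.write(b'hello')

# The bytes API returns the name untouched:
print(os.listdir(d.encode()))
```

(This assumes a filesystem that accepts arbitrary bytes in names, as typical Linux filesystems do; macOS's APFS, for instance, insists on valid UTF-8.)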
Now the Windows folks will say, "What is wrong with you UNIX people?" But it's not that tragic. The reason all of this works is that some smart people designed the system to be backwards compatible. Unlike Windows, which defines every API twice, the best approach on POSIX is to treat everything as bytes and only decode, with a best-guess encoding, for display purposes.
Take the cat command above. It prints error messages about files that cannot be opened because they don't exist, or are protected, or for any other reason. Let's assume a filename is latin1 encoded because it came from an external drive from 1995. The terminal receives the standard output and tries to decode it as UTF-8, because that's what it believes the encoding to be. Because the string is latin1, it does not decode cleanly. But fear not, nothing crashes: your terminal simply ignores what it cannot handle.
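Roughly what the terminal does can be sketched like this (my illustration, with an example name of my choosing):

```python
# A latin1-encoded name shown by a terminal that assumes UTF-8:
name = 'Zürich'.encode('latin1')          # b'Z\xfcrich'

# Strict UTF-8 decoding fails on the stray 0xFC byte ...
try:
    name.decode('utf-8')
except UnicodeDecodeError as err:
    print('undecodable:', err.reason)

# ... so a forgiving terminal effectively does this instead,
# replacing what it cannot handle:
print(name.decode('utf-8', 'replace'))     # Z�rich
```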
What does this look like in a graphical environment? The same thing. Take a file browser like Nautilus, which lists files. It associates each filename with an icon you can double-click, and to make the filename displayable it decodes it: for example, it tries UTF-8 and replaces the broken parts with a question-mark symbol. Your filename may not be entirely readable, but you can still open the file.
Unicode on UNIX is only crazy if you force everything to use it. But that's not how Unicode on UNIX works. UNIX does not have an API that distinguishes between Unicode and bytes. They are one and the same, which makes them easy to handle.
The C locale turns up in many places here. The C locale is an escape hatch in the POSIX specification to force the same behavior everywhere. A POSIX-compliant operating system needs to support setting LC_CTYPE to C so that everything uses ASCII.
This locale is picked in a number of situations. You mainly find it as the blank environment handed to all programs launched from cron, from your init system, and to subprocesses with an empty environment. The C locale restores a sane ASCII-land in environments where you otherwise could not trust anything.
But the word ASCII indicates a 7-bit encoding. This is not a problem, because the operating system is perfectly capable of processing bytes! Anything 8-bit based still works fine; you just honor the contract with the operating system that character handling is restricted to the first 7 bits. Any messages your tool generates should be ASCII encoded and in English.
Note that the POSIX specification does not say your application should die in flames.
Python 3 dies in flames.
Python 3 takes a different stance on Unicode than UNIX does. Python 3 says: everything is Unicode (by default, except in certain cases, and except when we send you data that was encoded twice, and even then it's sometimes still Unicode, although the wrong Unicode). Filenames are Unicode, the terminal is Unicode, stdin and stdout are Unicode; there is so much Unicode. Because UNIX is not Unicode, Python 3's position is now that Python 3 is right, UNIX is wrong, and people should change the POSIX definition to add Unicode. Then filenames would be Unicode, terminals would be Unicode, and you would no longer see errors caused by stray bytes.
And it's not just me saying this. These are the kinds of bugs caused by Python 3's broken ideas about Unicode:
If you Google around, you can find endless complaints. Look at how many people failed to install a pip package because of some character in a changelog, or because of their home folder, or because their SSH session forces ASCII, or because they are connecting with Putty.
Now let's fix cat for Python 3. How do we do that? First, we need to deal with bytes, because data might show up in an encoding that doesn't match the shell's. So the file contents need to be bytes in any case. But we also need to open the standard output in a way that supports bytes, which it does not by default. We also need to handle some cases separately, such as the Unicode APIs failing because the encoding is C. So here is cat, with Python 3 features:
```python
import sys
import shutil

def _is_binary_reader(stream, default=False):
    try:
        return isinstance(stream.read(0), bytes)
    except Exception:
        return default

def _is_binary_writer(stream, default=False):
    try:
        stream.write(b'')
    except Exception:
        try:
            stream.write('')
            return False
        except Exception:
            pass
        return default
    return True

def get_binary_stdin():
    # sys.stdin might or might not be binary in some extra cases.  By
    # default it's obviously non binary which is the core of the
    # problem but the docs recommend changing it to binary for such
    # cases so we need to deal with it.  Also someone might put
    # StringIO there for testing.
    is_binary = _is_binary_reader(sys.stdin, False)
    if is_binary:
        return sys.stdin
    buf = getattr(sys.stdin, 'buffer', None)
    if buf is not None and _is_binary_reader(buf, True):
        return buf
    raise RuntimeError('Did not manage to get binary stdin')

def get_binary_stdout():
    if _is_binary_writer(sys.stdout, False):
        return sys.stdout
    buf = getattr(sys.stdout, 'buffer', None)
    if buf is not None and _is_binary_writer(buf, True):
        return buf
    raise RuntimeError('Did not manage to get binary stdout')

def filename_to_ui(value):
    # The bytes branch is unnecessary for *this* script but otherwise
    # necessary as Python 3 still supports addressing files by bytes
    # through separate APIs.
    if isinstance(value, bytes):
        value = value.decode(sys.getfilesystemencoding(), 'replace')
    else:
        value = value.encode('utf-8', 'surrogateescape') \
            .decode('utf-8', 'replace')
    return value

binary_stdout = get_binary_stdout()

for filename in sys.argv[1:]:
    if filename != '-':
        try:
            f = open(filename, 'rb')
        except IOError as err:
            print('cat.py: %s: %s' % (filename_to_ui(filename), err),
                  file=sys.stderr)
            continue
    else:
        f = get_binary_stdin()

    with f:
        shutil.copyfileobj(f, binary_stdout)
```
And this is not even the worst version. Not because I wanted to overcomplicate things; it's just this complicated now. For example, one thing the example does not do is force-flush the text stdout before writing to the binary one. That isn't necessary here because the print calls go to stderr instead of stdout, but if you wanted to print anything to stdout, you would have to flush it. Why? Because stdout is a buffer on top of another buffer, and if you don't flush it explicitly, your output order can be wrong.
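A minimal sketch of why that flush matters, using an in-memory stream in place of the real stdout (the setup here is my own illustration):

```python
import io

# Simulate sys.stdout: a text wrapper layered over a binary buffer.
raw = io.BytesIO()
text = io.TextIOWrapper(raw, encoding='utf-8')

text.write('line via text layer\n')    # sits in the text layer's buffer
raw.write(b'line via binary layer\n')  # bypasses it and lands first!
text.flush()                           # text line arrives only now

# The binary line ends up *before* the text line that was written
# earlier -- exactly the reordering you get without a flush:
print(raw.getvalue())
```

Flushing the text layer before every binary write restores the intended order.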
And it's not just me: look at Twisted's compat module, for example, and you'll find the same dance.
Dancing the Encoding Dance
To understand how command-line arguments travel from the shell to your program, here, by the way, is what happens on Python 3, which is among the worst parts: the shell hands your process raw bytes, and Python decodes them with the locale's encoding using the surrogateescape error handler, smuggling any undecodable bytes through as lone surrogates.
Here is what happens on Python 2: the bytes from the shell land in sys.argv untouched.
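The Python 3 round trip can be sketched like this (my example bytes, not from the post):

```python
# What Python 3 roughly does with argv on POSIX: decode the raw bytes
# with the locale encoding and the 'surrogateescape' error handler,
# which smuggles undecodable bytes through as lone surrogates.
raw = b'caf\xe9'                       # latin1 bytes from the shell
arg = raw.decode('utf-8', 'surrogateescape')
assert arg == 'caf\udce9'              # 0xE9 became a lone surrogate

# Encoding back with the same handler recovers the original bytes:
assert arg.encode('utf-8', 'surrogateescape') == raw

# But the string cannot be encoded for display without the handler:
try:
    arg.encode('utf-8')
except UnicodeEncodeError:
    print('cannot be displayed as-is')
```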
Because string processing in the Python 2 version only happens when an error needs to be displayed, it is actually more correct: the shell can do a much better job of displaying the filename than your program can.
Note that this does not make the script wrong in other ways. If you need to do actual string processing on the input data, you should switch to Unicode handling, on both 2.x and 3.x. But in that case you also want your script to support a charset parameter, and the work on 2.x and 3.x is similar. It's just still worse on 3.x, where you need to construct the binary standard output, which you don't need on 2.x.
But you're wrong!
Obviously I'm wrong, and here is what I have been told:
You know what? I stopped complaining while doing HTTP work, because I accepted the idea that many of HTTP/WSGI's problems are things ordinary people rarely run into. But guess what? The very same problems show up in a hello-world scenario. Maybe I should give up on trying to achieve high-quality Unicode support in my libraries and just live with it.
I could refute the arguments above, but ultimately it doesn't matter. If Python 3 were the only Python, I would work around all these problems and develop with it. But there is a perfectly good alternative called Python 2, it has the larger user base, and that user base is solid. Right now this is all just very frustrating.
Python 3 may be strong enough to push UNIX down the Windows path, with Unicode mandated in many places, but I doubt it.
More likely, people will stick with Python 2 or build terrible hacks on top of Python 3. Or they'll use Go, which follows a model similar to Python 2: everything is a byte string, assumed to be UTF-8 where it matters. And that's the end of it.