I. Converting text to binary
1. Convert each character of #alex to binary according to the ASCII table.
2. How does the computer tell where the bits for # end and the bits for a begin?
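The two steps above can be sketched in Python with the built-in `ord` and `bin` functions:

```python
# Step 1: look up each character of "#alex" in the ASCII table
# and convert its code to binary.
text = "#alex"
codes = [ord(ch) for ch in text]       # ASCII code points
bits = [bin(code) for code in codes]   # binary form

print(codes)  # [35, 97, 108, 101, 120]
print(bits)   # ['0b100011', '0b1100001', '0b1101100', '0b1100101', '0b1111000']
```

Note that the binary codes have different lengths (# needs 6 bits, a needs 7), which is exactly the boundary problem raised in step 2.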
II. Computer capacity units
Because character codes vary in length, it is hard to tell where each character's code starts and ends. Standard ASCII defines 128 characters (extended ASCII up to 256), so the longest code is 11111111, i.e. 8 bits. Every character is therefore stored in a fixed width of 8 bits, with shorter codes padded with leading 0s.
Each 0 or 1 occupies one storage unit called a bit, the smallest unit of representation in a computer; 8 bits make up 1 byte.
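A minimal sketch of why a fixed 8-bit width solves the boundary problem: pad every code to 8 bits, and the bit stream can be split back unambiguously every 8 bits.

```python
# Pad each character's code to a fixed 8 bits and join into one bit stream:
text = "#alex"
stream = "".join(format(ord(ch), "08b") for ch in text)
print(stream)     # 40 bits, exactly 8 per character

# The fixed width makes the boundaries unambiguous:
# split every 8 bits to recover the original characters.
chars = [chr(int(stream[i:i + 8], 2)) for i in range(0, len(stream), 8)]
print("".join(chars))  # #alex
```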
III. Character encoding
To solve the incompatibility between the different national character encodings, the ISO standards organization created a unified character set:
Unicode: the international standard character set. It defines a unique code for every character in the world's writing systems, so that text can be exchanged across languages and platforms. Unicode (Universal Code) originally specified that every character and symbol is represented by at least 16 bits (2 bytes), giving 2**16 = 65536 code points.
UTF-8 is a space-optimized encoding of Unicode. Instead of using at least 2 bytes for every character, it classifies characters by how many bytes they need: ASCII characters are stored in 1 byte, European characters in 2 bytes, and East Asian characters (such as Chinese) in 3 bytes.
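The different byte lengths can be verified directly with `str.encode`; the sample characters here are illustrative:

```python
# Byte lengths of one character under UTF-8, by character class:
print(len("a".encode("utf-8")))   # 1 byte  (ASCII)
print(len("é".encode("utf-8")))   # 2 bytes (European)
print(len("中".encode("utf-8")))  # 3 bytes (East Asian)
```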
The default encoding of the Chinese version of Windows is GBK.
The default encoding of macOS / Linux is UTF-8.
Python 2.x's default encoding is ASCII.
Python 3.x's default encoding is UTF-8.
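These defaults can be inspected from Python itself; `locale.getpreferredencoding()` reflects the operating-system default, so its output varies by platform (e.g. GBK-family on Chinese Windows, UTF-8 on macOS/Linux):

```python
import locale
import sys

# Python 3's default string encoding:
print(sys.getdefaultencoding())       # utf-8

# The operating system's preferred encoding (platform-dependent):
print(locale.getpreferredencoding())
```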
IV. Floating-point numbers
A floating-point number represents a value from a certain subset of the rational numbers and is used to approximate real numbers in a computer. Concretely, the value is obtained by multiplying an integer or fixed-point number (the mantissa) by an integer power of some base (usually 2 in computers), much like base-10 scientific notation (e.g. 1.23 * 10**4).
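A small sketch of this mantissa-times-power-of-base idea, using the standard-library `math.frexp`, which decomposes a float into exactly that form with base 2:

```python
import math

# Decompose a float into mantissa * 2**exponent (base-2 "scientific notation"):
m, e = math.frexp(6.5)
print(m, e)               # 0.8125 3   ->  0.8125 * 2**3 == 6.5
print(m * 2 ** e == 6.5)  # True

# 0.1 has no exact base-2 representation; the stored value
# is the nearby fraction returned here:
print((0.1).as_integer_ratio())
```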
V. Floating-point precision
Integers and floating-point numbers are stored differently inside the computer: integer arithmetic is always exact, while floating-point arithmetic may introduce rounding errors.
Python floats carry about 17 significant digits of precision by default, but beyond that the value is not exact. This problem is not specific to Python; other languages have it too.
The reason lies in how floating-point numbers are stored.
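A classic demonstration of the rounding error, alongside the exactness of integer arithmetic:

```python
import math

# Floating-point addition can round:
print(0.1 + 0.2)                      # 0.30000000000000004
print(0.1 + 0.2 == 0.3)               # False

# Integer arithmetic is always exact, even for huge values:
print(10 ** 20 + 1)                   # 100000000000000000001

# Compare floats with a tolerance instead of ==:
print(math.isclose(0.1 + 0.2, 0.3))   # True
```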
How to compute with higher-precision floating point:
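The source does not name a method, but one common approach is the standard-library `decimal` module, which does exact base-10 arithmetic with configurable precision:

```python
from decimal import Decimal, getcontext

# Construct Decimals from strings to avoid inheriting float rounding:
print(Decimal("0.1") + Decimal("0.2"))   # 0.3 exactly

# Precision (number of significant digits) is configurable:
getcontext().prec = 50
print(Decimal(1) / Decimal(7))           # 1/7 to 50 significant digits
```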
VI. Data type: list
Python data types and character encodings