1. Why encode? Because the CPU only understands numbers.
2. ASCII: a character uses 7 bits, stored in one byte, for a total of 128 characters.
3. ISO-8859-1: since ASCII leaves the highest bit unused, which is a pity, ISO-8859-1 appeared: one byte per character, 256 characters, the default encoding for many protocols.
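As a quick illustration (a minimal sketch, not from the original notes; the class name is made up), the ASCII value of a character fits in 7 bits:

```java
public class AsciiDemo {
    public static void main(String[] args) {
        // 'A' is ASCII 65; its binary form needs only 7 bits
        System.out.println((int) 'A');                   // 65
        System.out.println(Integer.toBinaryString('A')); // 1000001
    }
}
```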
4. Chinese encodings
GB2312: two bytes per character, used in mainland China, about 6,000 characters
BIG5: two bytes per character, the traditional-Chinese encoding standard, about 13,000 characters in total
GBK: extends GB2312, can represent about 20,000 characters, incompatible with BIG5
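A minimal sketch of that incompatibility, assuming the JRE ships the GBK and Big5 charsets (typical for standard JDKs, but not guaranteed by the spec): the same character '中' encodes to different byte values in each.

```java
import java.nio.charset.Charset;

public class CjkDemo {
    public static void main(String[] args) {
        // "GBK" and "Big5" availability is assumed; the spec only guarantees
        // US-ASCII, ISO-8859-1, UTF-8, UTF-16, UTF-16BE and UTF-16LE.
        byte[] gbk  = "中".getBytes(Charset.forName("GBK"));
        byte[] big5 = "中".getBytes(Charset.forName("Big5"));
        System.out.printf("GBK:  %02X %02X%n", gbk[0], gbk[1]);   // D6 D0
        System.out.printf("Big5: %02X %02X%n", big5[0], big5[1]); // A4 A4
    }
}
```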
Unicode
Also known as the "universal code". Two organizations (the Unicode Consortium and ISO) set out to build a code that can represent every character on Earth, and one of the results is Unicode. Strictly speaking, Unicode is a character map: each character corresponds to a number called a code point, and it is compatible with ASCII, so 'a' corresponds to the number 97. Some people say a Unicode character occupies two bytes; that is a misconception, since code points now extend past 16 bits (up to U+10FFFF), and in any case Unicode only defines which character corresponds to which number, not how it is stored.
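To make the "two bytes" misconception concrete, a small Java illustration (U+1F600, an emoji above 0xFFFF, is my example, not the original's):

```java
public class CodePointDemo {
    public static void main(String[] args) {
        // 'a' has the same code point as its ASCII value
        System.out.println("a".codePointAt(0));              // 97
        // U+1F600 is one code point, but it cannot fit in 16 bits:
        // in Java it occupies two char values (a surrogate pair)
        String s = new String(Character.toChars(0x1F600));
        System.out.println(s.length());                      // 2
        System.out.println(s.codePointCount(0, s.length())); // 1
    }
}
```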
Java and Unicode
In Java, to keep a single uniform mapping, strings internally store encoding-independent Unicode code points. Otherwise one string might hold a GBK character while another holds a Big5 character, and even printing a string would be a problem.
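A minimal sketch of this: hard-coded GBK and Big5 byte sequences for '中' (values taken from the respective code tables; charset availability assumed as above) both decode to the same internal code point.

```java
import java.nio.charset.Charset;

public class DecodeDemo {
    public static void main(String[] args) {
        byte[] gbkBytes  = {(byte) 0xD6, (byte) 0xD0}; // '中' in GBK
        byte[] big5Bytes = {(byte) 0xA4, (byte) 0xA4}; // '中' in Big5
        String a = new String(gbkBytes, Charset.forName("GBK"));
        String b = new String(big5Bytes, Charset.forName("Big5"));
        // After decoding, both hold the same Unicode code point
        System.out.println(a.equals(b));      // true
        System.out.println(a.codePointAt(0)); // 20013 (U+4E2D)
    }
}
```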
UTF
Unicode only defines the mapping relationship; how to store it, and in how many bytes, is a separate question.
At present there are two lines of approach: UCS and UTF.
UTF-8
Because it saves bandwidth, it is the most widely used encoding on the internet.
It uses 1, 2, 3, or 4 bytes to store a character: an English character usually takes one byte, a Chinese character three bytes.
For the specific format, see the reference link below.
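A quick way to see the variable length in Java (StandardCharsets is part of the JDK; the class name is made up for illustration):

```java
import java.nio.charset.StandardCharsets;

public class Utf8Demo {
    public static void main(String[] args) {
        // ASCII letters take 1 byte in UTF-8, common Chinese characters take 3
        System.out.println("a".getBytes(StandardCharsets.UTF_8).length);  // 1
        System.out.println("中".getBytes(StandardCharsets.UTF_8).length); // 3
    }
}
```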
UTF-16 and BOM
UTF-16 stores a character in 2 or 4 bytes. To distinguish whether the high byte comes first or last, a special BOM (Byte Order Mark) is added in front of the byte stream. UTF-8 does not need a BOM, but Microsoft tools have the habit of adding one anyway.
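A minimal sketch in Java: the built-in "UTF-16" charset encodes big-endian and prepends the BOM bytes FE FF.

```java
import java.nio.charset.StandardCharsets;

public class BomDemo {
    public static void main(String[] args) {
        byte[] bytes = "a".getBytes(StandardCharsets.UTF_16);
        for (byte b : bytes) {
            System.out.printf("%02X ", b); // FE FF 00 61: BOM, then 'a'
        }
    }
}
```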
For a more detailed explanation, see https://www.cnblogs.com/leesf456/p/5317574.html ("Quickly understand encoding: Unicode and UTF-8").