This article traces the binary format of UTF-8 characters with a C++ program, to get a feel for how UTF-8 works in practice.
The development environment is Ubuntu 12.04 (32-bit) with GCC 4.6.3; the system byte order is little-endian.
Take the Chinese character '一'. Using the online tool http://rishida.net/tools/conversion/, we find that its Unicode code point is 0x4E00.
Encoded as UTF-8, 0x4E00 becomes three bytes: E4 B8 80.
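To see where those bytes come from: code points in the range U+0800..U+FFFF use the three-byte pattern 1110xxxx 10xxxxxx 10xxxxxx, so 0x4E00 = 0100111000000000 splits into 0100, 111000, and 000000, giving E4 B8 80. Below is a minimal sketch of this three-byte case (my illustration, not from the original article; EncodeUtf8ThreeBytes is a hypothetical helper):

#include <cstdint>
#include <cstdio>

// Encode a code point in the three-byte UTF-8 range (U+0800..U+FFFF)
// into the pattern 1110xxxx 10xxxxxx 10xxxxxx. No range checking.
void EncodeUtf8ThreeBytes(uint32_t cp, unsigned char out[3]) {
  out[0] = 0xE0 | ((cp >> 12) & 0x0F);  // leading byte: top 4 bits
  out[1] = 0x80 | ((cp >> 6) & 0x3F);   // continuation: middle 6 bits
  out[2] = 0x80 | (cp & 0x3F);          // continuation: low 6 bits
}

int main() {
  unsigned char bytes[3];
  EncodeUtf8ThreeBytes(0x4E00, bytes);
  printf("%02X %02X %02X\n", bytes[0], bytes[1], bytes[2]);  // E4 B8 80
}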
The following code prints the binary representation:
# Include "test. H "# include" util/endian. H "# include" util/UTF. H "# include <iostream> using namespace STD; int main (INT argc, char ** argv) {// test (3> 2 ); char const * P = "1"; cout <printstringasbinarystring (p) <Endl; string STR = "1"; cout <printstringasbinarystring (STR) <Endl; cout <islittleendian () <Endl ;}
The two functions are implemented in util/utf.h:
#ifndef UTIL_UTF_H_
#define UTIL_UTF_H_

#include <cstring>
#include <sstream>
#include <string>

#include "util/endian.h"

using namespace std;

string PrintStringAsBinaryString(char const* p) {
  stringstream stream;
  for (size_t i = 0; i < strlen(p); ++i) {
    stream << PrintIntAsBinaryString(p[i]);
    stream << " ";
  }
  return stream.str();
}

string PrintStringAsBinaryString(string const& str) {
  stringstream stream;
  for (size_t i = 0; i < str.size(); ++i) {
    stream << PrintIntAsBinaryString(str[i]);
    stream << " ";
  }
  return stream.str();
}

#endif  // UTIL_UTF_H_
The code for PrintIntAsBinaryString is:
// T must be an integer type.
template<class T>
string PrintIntAsBinaryString(T v) {
  stringstream stream;
  int i = sizeof(T) * 8 - 1;
  while (i >= 0) {
    stream << Bit_Value(v, i);
    --i;
  }
  return stream.str();
}
// Get the value of the bit specified by index.
// index starts at 0 (the least significant bit).
template<class T>
int Bit_Value(T value, uint8_t index) {
  // Shift a value of type T rather than a plain int literal,
  // so that index 31 is safe when T is a 32-bit type.
  return (value & (T(1) << index)) == 0 ? 0 : 1;
}
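One more helper, IsLittleEndian() from util/endian.h, is used by the test but not shown here. A minimal sketch, assuming it simply checks whether the least significant byte of an integer sits at the lowest address:

#ifndef UTIL_ENDIAN_H_
#define UTIL_ENDIAN_H_

// Returns true on a little-endian machine, where the least
// significant byte of 0x0001 is stored at the lowest address.
inline bool IsLittleEndian() {
  unsigned short v = 0x0001;
  return *reinterpret_cast<unsigned char*>(&v) == 0x01;
}

#endif  // UTIL_ENDIAN_H_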
The result is as follows:
11100100 10111000 10000000
Exactly E4 B8 80. Here we can see that the leading byte, E4, is the most significant byte of the sequence and is stored at the starting address of the memory pointed to by char const* p, even though the system itself is little-endian. In other words the bytes sit in "big-endian" order: UTF-8 is a byte-oriented encoding whose byte order is fixed by the encoding itself, independent of the machine's endianness (which is also why, unlike UTF-16, UTF-8 needs no LE/BE variants).
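To make the contrast concrete, here is a small sketch (my illustration; DumpBytes is a hypothetical helper) that prints the in-memory bytes of the UTF-8 string next to those of a uint16_t holding 0x4E00. On this little-endian system the integer's bytes come out reversed, while the UTF-8 bytes keep their fixed order:

#include <cstddef>
#include <cstdint>
#include <cstdio>

// Print the raw bytes at p, lowest address first.
static void DumpBytes(void const* p, size_t n) {
  unsigned char const* b = static_cast<unsigned char const*>(p);
  for (size_t i = 0; i < n; ++i) printf("%02X ", b[i]);
  printf("\n");
}

int main() {
  char const* utf8 = "\xE4\xB8\x80";  // UTF-8 bytes of U+4E00 ('一')
  uint16_t cp = 0x4E00;
  DumpBytes(utf8, 3);         // E4 B8 80 -- order fixed by the encoding
  DumpBytes(&cp, sizeof cp);  // 00 4E on little-endian, 4E 00 on big-endian
}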
Next, let's continue and convert the UTF-8 bytes back into the Unicode code point. Note the code_point type definition:
typedef uint32_t code_point;
Now let's modify the test code and bring in the boost::locale library. You could write this algorithm yourself, but time is short, so we use a mature library first.
# Include "test. H "# include" util/endian. H "# include" util/UTF. H "# include <iostream> # include <boost/locale/UTF. HPP> using namespace STD; using namespace boost: locale: UTF; int main (INT argc, char ** argv) {// test (3> 2 ); char const * P = "1"; cout <printstringasbinarystring (p) <Endl; string STR = "1"; cout <printstringasbinarystring (STR) <Endl; code_point c = utf_traits <char, sizeof (char) >:: decode (p, p + 3); cout <"code point: 0x" <STD :: hex <C <"binary format: B" <printintasbinarystring (c) <Endl ;}
The decode call near the end of main performs the decoding.
The result is:
code point: 0x4e00 binary format: B00000000000000000100111000000000
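The output matches 0x4E00 exactly. In real code, though, the return value of decode should be checked: boost::locale::utf reports errors through the sentinel code points illegal (malformed input) and incomplete (truncated sequence) rather than exceptions. A hedged sketch of such a check, based on that API:

#include <boost/locale/utf.hpp>
#include <iostream>

int main() {
  using namespace boost::locale::utf;
  char const* p = "\xE4\xB8\x80";  // UTF-8 bytes of U+4E00
  code_point c = utf_traits<char>::decode(p, p + 3);
  if (c == illegal || c == incomplete) {
    std::cerr << "invalid UTF-8 input" << std::endl;
    return 1;
  }
  std::cout << std::hex << c << std::endl;  // prints: 4e00
}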
You can also pass a string::iterator as the argument, but note that str.begin() cannot be used directly as the parameter; it should be written like this:
string::iterator itor = str.begin();
utf_traits<char, sizeof(char)>::decode(itor, str.end());
Because decode takes its first parameter by reference and advances it internally with ++, the argument must be a modifiable lvalue. The temporary returned by str.begin() cannot be bound to a non-const reference, so passing it directly causes a compile error.
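As mentioned above, the decoding algorithm itself is simple enough to write by hand. A minimal sketch of just the three-byte case, assuming well-formed input and doing none of the validation Boost.Locale performs (DecodeUtf8ThreeBytes is a hypothetical name):

#include <cstdint>
#include <cstdio>

// Decode one three-byte UTF-8 sequence (1110xxxx 10xxxxxx 10xxxxxx)
// back into a code point. Assumes well-formed input.
uint32_t DecodeUtf8ThreeBytes(unsigned char const* p) {
  return ((p[0] & 0x0Fu) << 12)   // 4 payload bits of the leading byte
       | ((p[1] & 0x3Fu) << 6)    // 6 bits of the first continuation byte
       |  (p[2] & 0x3Fu);         // 6 bits of the second continuation byte
}

int main() {
  unsigned char const bytes[] = { 0xE4, 0xB8, 0x80 };
  printf("0x%X\n", DecodeUtf8ThreeBytes(bytes));  // prints: 0x4E00
}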