Conversion between char *, CString, and WCHAR *
All of the GDI+ class interface functions that take a string parameter expect a UNICODE string, that is, a WCHAR *. I was confused by this at first, because Windows programming often uses CString, and reading file data with IO streams produces char *. Drawing on summaries found online, I use the following basic methods to convert between the three:
char * to WCHAR *:
MultiByteToWideChar(CP_ACP, 0, src, srcCount, dest, destCount); // src is a char * buffer, dest is a WCHAR * buffer
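A minimal round-trip sketch of this direction; the variable names (ansi, wide, needed) and the two-call pattern for sizing the buffer are my additions, not from the original post:

#include <windows.h>

// Convert an ANSI (CP_ACP) string to a wide string.
const char *ansi = "hello";
// First call with a NULL buffer asks for the required length in WCHARs (including the terminator).
int needed = MultiByteToWideChar(CP_ACP, 0, ansi, -1, NULL, 0);
WCHAR *wide = new WCHAR[needed];
MultiByteToWideChar(CP_ACP, 0, ansi, -1, wide, needed);
// ... pass wide to GDI+ functions ...
delete[] wide;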
Similarly, to convert WCHAR * to char *, use WideCharToMultiByte:
WideCharToMultiByte(CP_ACP, 0, src, srcCount, dest, destCount, NULL, NULL); // src is a WCHAR * buffer, dest is a char * buffer
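And the reverse direction, again only a sketch with illustrative names:

// Convert a wide string back to an ANSI (CP_ACP) string.
const WCHAR *wide = L"hello";
int needed = WideCharToMultiByte(CP_ACP, 0, wide, -1, NULL, 0, NULL, NULL);
char *ansi = new char[needed];
WideCharToMultiByte(CP_ACP, 0, wide, -1, ansi, needed, NULL, NULL);
// ... use ansi ...
delete[] ansi;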
CString to WCHAR *:
wchar_t *p = str.AllocSysString(); // returns a BSTR, which is a WCHAR *
You can also use A2W(str), but you must include the ATL conversion header (#include <atlconv.h>) and place the USES_CONVERSION macro before using A2W.
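Putting both methods together, a sketch that assumes an MBCS (non-UNICODE) build so that CString holds char data:

#include <atlconv.h>   // USES_CONVERSION / A2W

CString str = "some text";

// 1. AllocSysString returns a BSTR (a WCHAR *); release it with SysFreeString.
BSTR bstr = str.AllocSysString();
// ... pass bstr where a WCHAR * is expected ...
SysFreeString(bstr);

// 2. A2W converts on the stack; the result is valid only inside the current function.
USES_CONVERSION;
WCHAR *w = A2W(str);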
Others:
char * to CString:
In addition to direct assignment, you can also use CString::Format. For example:
char *p = "sfdasf";
CString str = p;       // direct assignment
str.Format("%s", p);   // or via Format
CString to char *:
1. Direct forced type conversion:
CString ss = "sfasf";
char *p = (LPSTR)(LPCSTR)ss;
2. CString::GetBuffer or LockBuffer (see the sketch below for pairing these with ReleaseBuffer/UnlockBuffer):
char *p = str.GetBuffer();
char *pt = str.LockBuffer();
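One thing the lines above leave out: GetBuffer and LockBuffer should be paired with ReleaseBuffer and UnlockBuffer once you are done with the pointer. A sketch, assuming an MBCS build where CString stores char:

CString str = "sfasf";

// 1. Const cast: read-only access; do not write through p1.
char *p1 = (LPSTR)(LPCSTR)str;

// 2. GetBuffer: writable access; call ReleaseBuffer when finished.
char *p2 = str.GetBuffer(0);
// ... modify p2 in place if needed ...
str.ReleaseBuffer();

// 3. LockBuffer: pins the internal buffer; call UnlockBuffer when finished.
char *p3 = str.LockBuffer();
// ... read p3 ...
str.UnlockBuffer();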
WCHAR * to CString:
I found no relevant documentation online, but direct assignment compiles. However, in my tests, although there was no compilation error, letters displayed fine while Chinese characters came out garbled in a MessageBox, perhaps because Chinese characters are displayed using DBCS (this is purely a guess). In general, in Windows programming, defining #define UNICODE makes CString, TCHAR, and so on use the UNICODE encoding, where one character occupies two bytes.
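A sketch of how I would handle this case, on the assumption that the garbling comes from assigning a wide string to an ANSI (MBCS-build) CString; the W2A branch is my suggestion, not from the original post:

const WCHAR *wide = L"wide text";

#ifdef _UNICODE
// UNICODE build: CString already stores WCHAR, so direct assignment is safe.
CString s = wide;
#else
// MBCS build: convert explicitly, otherwise multi-byte characters come out garbled.
USES_CONVERSION;                // requires #include <atlconv.h>
CString s = W2A(wide);
#endif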