Reference: http://blog.csdn.net/zklth/article/details/11829563
When Hadoop processes GBK-encoded text, the output comes out garbled. The reason is that the encoding Hadoop uses internally is hard-coded to UTF-8, so input files in any other encoding (such as GBK) produce mojibake.
If you only need to read the text in a Mapper or Reducer, call transformTextToUTF8(text, "GBK") to transcode it, which ensures the job operates on UTF-8 data.
    public static Text transformTextToUTF8(Text text, String encoding) {
        String value = null;
        try {
            value = new String(text.getBytes(), 0, text.getLength(), encoding);
        } catch (UnsupportedEncodingException e) {
            e.printStackTrace();
        }
        return new Text(value);
    }
The core line here is: String line = new String(text.getBytes(), 0, text.getLength(), "GBK"); where text is of type Text.
If you instead call String line = value.toString() directly, the output is garbled. This is caused by the Text Writable type. As a beginner, I assumed Text was simply the Writable wrapper for String, just as LongWritable wraps long. But Text differs from String: Text is a Writable that stores its contents as UTF-8 bytes, while a Java String holds Unicode characters. The toString() method therefore decodes the underlying bytes as UTF-8, so GBK-encoded data that was read into a Text and converted with toString() becomes mojibake.
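This mismatch can be reproduced in plain Java without Hadoop at all; the sketch below (the string literal is just sample data) shows the same bytes decoded both ways:

```java
import java.nio.charset.Charset;
import java.nio.charset.StandardCharsets;

public class MojibakeDemo {
    public static void main(String[] args) {
        // Raw bytes as they would arrive from a GBK-encoded input file.
        byte[] gbkBytes = "中文".getBytes(Charset.forName("GBK"));

        // What Text.toString() effectively does: decode the bytes as UTF-8.
        String wrong = new String(gbkBytes, StandardCharsets.UTF_8);

        // What the fix does: decode with the file's actual encoding.
        String right = new String(gbkBytes, Charset.forName("GBK"));

        System.out.println(wrong);  // mojibake / replacement characters
        System.out.println(right);  // the original text
    }
}
```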
The correct approach is to take the byte array of the input Text value (value.getBytes()) and use the String constructor String(byte[] bytes, int offset, int length, Charset charset), which constructs a new String by decoding the specified byte subarray with the specified charset.
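As a small illustration (with made-up sample data), the charset-aware constructor can decode just a slice of a byte buffer, which is exactly what the offset and length arguments are for:

```java
import java.nio.charset.Charset;

public class SubarrayDecode {
    public static void main(String[] args) {
        Charset gbk = Charset.forName("GBK");
        // "AB" (1 byte each in GBK) + "中文" (2 bytes each) + "CD".
        byte[] bytes = "AB中文CD".getBytes(gbk);

        // Decode only the 4-byte GBK subarray starting at offset 2.
        String middle = new String(bytes, 2, 4, gbk);
        System.out.println(middle);  // "中文"
    }
}
```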
If the map/reduce output must be written in some other encoding, you need to implement your own OutputFormat that specifies that encoding; the default TextOutputFormat, which always writes UTF-8, cannot be used.
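The essential change inside such a custom OutputFormat is small: instead of emitting UTF-8 bytes for each line, its RecordWriter must encode the line with the target charset before writing. The core idea can be sketched without the Hadoop classes (the class and method names below are illustrative stand-ins, not Hadoop API):

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.OutputStream;

// Illustrative stand-in for the write() method of the RecordWriter
// that a GBK-aware TextOutputFormat replacement would return.
public class GbkLineWriter {
    private final OutputStream out;

    public GbkLineWriter(OutputStream out) {
        this.out = out;
    }

    // Write one key/value line encoded as GBK instead of UTF-8.
    public void write(String key, String value) throws IOException {
        String line = key + "\t" + value + "\n";
        out.write(line.getBytes("GBK"));
    }

    public static void main(String[] args) throws IOException {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        new GbkLineWriter(buf).write("城市", "北京");
        // The buffer now holds GBK bytes, readable by GBK consumers.
        System.out.println(new String(buf.toByteArray(), "GBK"));
    }
}
```

In a real implementation, the same getBytes("GBK") call would replace the UTF-8 encoding step inside the RecordWriter returned by your OutputFormat subclass.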
In short: this is the Hadoop encoding problem, the garbled conversion between Text and String in MapReduce.