If you use URLConnection to crawl pages, you should do this right from the start. OK, back to the point ...
Most Chinese and English content displays fine, yet sometimes some Chinese or English characters still come out garbled. Why? This question nagged at me for a long time ... really, really annoying.
There are solutions online, and the root of the problem actually traces back to the sample code most of us saw when first learning Java. The code you typically find online looks like this:
byte[] by = new byte[20000];
StringBuffer strBuffer = new StringBuffer();
int len = 0;
while ((len = urlStream.read(by, 0, by.length)) != -1) {
    strBuffer.append(new String(by, 0, len, "UTF-8"));
}
This style of code is exactly what causes the garbled characters in the fetched content.
The reason: the stream is read in fixed-size chunks (the byte array holds at most 20000 bytes), and each chunk is decoded into a String before being appended to the StringBuffer. A multi-byte UTF-8 character can be split across two chunks, and decoding each chunk on its own inevitably produces garbled text.
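To see this concretely, here is a small self-contained sketch (not from the original post) that simulates a read() stopping in the middle of a multi-byte UTF-8 character; decoding each chunk on its own produces replacement characters even though the full byte array decodes cleanly:

import java.nio.charset.StandardCharsets;

public class SplitUtf8Demo {
    public static void main(String[] args) {
        // "中" and "文" are each three bytes in UTF-8.
        byte[] bytes = "中文abc".getBytes(StandardCharsets.UTF_8);

        // Simulate a chunked read that cuts in the middle of "文":
        // the first chunk gets only the first byte of that character.
        int cut = 4;
        String part1 = new String(bytes, 0, cut, StandardCharsets.UTF_8);
        String part2 = new String(bytes, cut, bytes.length - cut, StandardCharsets.UTF_8);

        System.out.println(part1 + part2);                            // garbled output
        System.out.println(new String(bytes, StandardCharsets.UTF_8)); // correct output
    }
}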
So how do we solve this problem?
BufferedReader reader = new BufferedReader(new InputStreamReader(urlStream, "UTF-8"));
StringBuffer strBuffer = new StringBuffer();
String line = null;
while ((line = reader.readLine()) != null) {
    strBuffer.append(line);
}
Here the stream is decoded into characters first (by the InputStreamReader) and only then appended to the StringBuffer, so no character is ever split. No truncation, no garbling. The garbling came from truncation: a character that should occupy two or three bytes gets cut so only part of it lands in the chunk, and then we try to decode it anyway. How could that not come out garbled?
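Putting it together, here is a minimal runnable sketch of the whole fetch, assuming the page really is served as UTF-8 (example.com is just a placeholder; in practice, prefer the charset declared in the response's Content-Type header when it is present):

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.net.URL;
import java.net.URLConnection;
import java.nio.charset.StandardCharsets;

public class FetchPage {
    public static void main(String[] args) throws IOException {
        // Placeholder URL, purely for illustration.
        URLConnection conn = new URL("http://example.com/").openConnection();

        StringBuilder page = new StringBuilder();
        // The reader decodes bytes into characters as it reads, so a multi-byte
        // character is never split the way the chunked byte[] version splits it.
        try (BufferedReader reader = new BufferedReader(
                new InputStreamReader(conn.getInputStream(), StandardCharsets.UTF_8))) {
            String line;
            while ((line = reader.readLine()) != null) {
                // readLine() strips line terminators, so re-append one if the
                // original line breaks matter for later parsing.
                page.append(line).append('\n');
            }
        }
        System.out.println(page);
    }
}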
Does it make sense now, folks?
For more posts, see: http://www.cnblogs.com/jackicalSong/