Automatically identifying a webpage's character encoding during body-text extraction.

Source: Internet
Author: User
Tags: intl
While extracting body text from webpages, the 12fanyi.cn translation team (http://www.12fanyi.cn) frequently ran into garbled output caused by pages using different character encodings. The notes below were collected for beginners facing the same problem.
   The first article is excerpted from material on universalchardet: how does one identify the encoding used by a webpage?

First, the webpage or server tells the browser directly which encoding the page uses, for example via the charset parameter of the HTTP Content-Type header or the charset declaration in the page itself. This case is easy to handle: simply read those attributes to learn the encoding.
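As a minimal sketch of this first case, the declared charset can be pulled out of the HTTP header or the HTML head with two regular expressions. The helper name `declared_charset` and the exact patterns are my own illustration, not code from the article:

```python
import re

# Hypothetical helper (not from the original article): extract a declared
# charset from an HTTP Content-Type header or from an HTML <meta> tag.
def declared_charset(content_type_header, html_head):
    # 1. HTTP header, e.g. "text/html; charset=gb2312"
    m = re.search(r'charset=([\w-]+)', content_type_header or '', re.I)
    if m:
        return m.group(1).lower()
    # 2. <meta charset="..."> or <meta http-equiv=... content="...charset=...">
    m = re.search(r'<meta[^>]+charset=["\']?([\w-]+)', html_head or '', re.I)
    if m:
        return m.group(1).lower()
    return None  # nothing declared; the page must be guessed (second case)

print(declared_charset('text/html; charset=GB2312', ''))       # gb2312
print(declared_charset('text/html', '<meta charset="utf-8">')) # utf-8
```

Note that a real browser applies more elaborate sniffing rules (the WHATWG Encoding Standard specifies them), but when either declaration is present, reading it is all that is needed.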

Second, the browser guesses automatically. This resembles what a person does by hand: when a page declares no charset and the text appears garbled, we manually switch the page encoding in the browser, trying options until it displays correctly. An automatic detector performs the same trial in software.

This article is about the second method: using a program to guess the character set of a page or file automatically. The principle is statistical: analyze the distribution of bytes and characters in the input and determine which encoding best matches it. Mozilla published a paper on this titled "A Composite Approach to Language/Encoding Detection." The concrete implementation is Mozilla's own C++ library, universalchardet. I searched the internet but could not find a .NET implementation class library; only a Java port existed on Google Code, so there was no way around it: I translated that code to C#.
C# source code: http://code.google.com/p/nuniversalchardet/
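To make the guessing idea concrete, here is a deliberately simplified sketch in Python. It only checks which candidate encodings can decode the bytes at all, whereas real detectors like universalchardet additionally score character-frequency statistics to pick the most likely language/encoding pair; the candidate list and function name are my own assumptions:

```python
# Minimal sketch (NOT Mozilla's algorithm): guess an encoding by trial
# decoding, returning the first candidate that decodes without error.
CANDIDATES = ['utf-8', 'gb18030', 'big5', 'latin-1']

def guess_encoding(data):
    for enc in CANDIDATES:
        try:
            data.decode(enc)
            return enc
        except UnicodeDecodeError:
            continue
    return 'latin-1'  # latin-1 accepts any byte, so it is the final fallback

print(guess_encoding('中文'.encode('utf-8')))  # utf-8
print(guess_encoding('中文'.encode('gbk')))    # gb18030
```

UTF-8 is tried first because its multi-byte structure is strict: random non-UTF-8 bytes rarely form valid UTF-8, so a successful decode is strong evidence. Statistical detectors refine this by asking not just "can it decode?" but "do the decoded characters look like real text in some language?"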

Ps1. Why claim this is more accurate than IE? IE also ships a built-in character-set guessing function, and people have wrapped the IE interface in a library (http://www.codeproject.com/KB/recipes/DetectEncoding.aspx) to guess charsets. In my tests, however, that interface's accuracy is not high: its success rate is much lower than universalchardet's.

Ps2. nchardet, which circulates widely on the internet, is a C# port of the chardet module from earlier Mozilla versions. Its accuracy is also relatively low, comparable to the success rate of the IE interface.

Ps3. References

Juniversalchardet: http://code.google.com/p/juniversalchardet/ (the Java code has bugs in the Big5Prober and GB18030Prober classes; these are fixed in the C# port)

Principle reference: http://www.mozilla.org/projects/intl/UniversalCharsetDetection.html

Article 2 from: [Little Tornado's development diary] Asynchronously fetching HTML source, automatically identifying the page encoding, and optimizing an XPath-based intelligent extraction engine
 

The .NET (C#) port of Mozilla's encoding-detection module: nuniversalchardet

using Mozilla.NUniversalCharDet;

public static string DetectEncodingBytes(byte[] detectBuff)
{
    UniversalDetector det = new UniversalDetector(null);
    // For streamed input, call HandleData repeatedly until det.IsDone();
    // here the whole buffer is handed over in a single call.
    det.HandleData(detectBuff, 0, detectBuff.Length);
    det.DataEnd();
    if (det.GetDetectedCharset() != null)
    {
        return det.GetDetectedCharset();
    }
    return "UTF-8"; // fallback when detection fails
}
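The C# function above combines detection with a UTF-8 fallback. The same overall pipeline, prefer a declared charset, otherwise fall back to UTF-8 with replacement of undecodable bytes, can be sketched in Python; the function name `decode_page` and its structure are my own illustration, with any real detector (such as nuniversalchardet) slotting in before the fallback:

```python
# Sketch of the decode pipeline: trust the declared charset if it works,
# else try UTF-8, else decode as UTF-8 replacing bad bytes so extraction
# can continue instead of crashing.
def decode_page(raw, declared):
    for enc in ([declared] if declared else []) + ['utf-8']:
        try:
            return raw.decode(enc)
        except (UnicodeDecodeError, LookupError):
            continue  # declared charset wrong or unknown; keep trying
    return raw.decode('utf-8', errors='replace')

print(decode_page('中文'.encode('gbk'), 'gbk'))  # 中文
```

Catching `LookupError` matters in practice: pages sometimes declare charset labels Python does not recognize, and the pipeline should degrade to guessing rather than fail.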

 

