Node crawler: how to fix garbled results when the target page is encoded as gb2312


I recently needed to gather statistics on how heavily a fire-safety site was promoting its news. When I used the plain http module to fetch the pages, the results came back garbled; looking at the page source revealed that the site is encoded as gb2312, and the data fetched by the http module was not being decoded as GBK. This article therefore covers how to avoid garbled output when using Node to crawl a site encoded as gb2312.
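To see where the garbage comes from, here is a minimal sketch (not from the original article) that fetches the page as raw bytes and decodes them with the iconv-lite package; treating the same bytes as utf-8 is exactly what produces the garbled text:

// Minimal sketch (not from the original article): fetching a gb2312 page
// with the plain http module and decoding the raw bytes with iconv-lite.
var http = require("http");
var iconv = require("iconv-lite");

http.get("http://www.cqfire.com/", function (res) {
    var chunks = [];
    // Collect raw Buffers; calling res.setEncoding("utf8") here and
    // concatenating strings is what produces the garbled output.
    res.on("data", function (chunk) {
        chunks.push(chunk);
    });
    res.on("end", function () {
        // gbk is a superset of gb2312, so it decodes these pages correctly
        var html = iconv.decode(Buffer.concat(chunks), "gbk");
        console.log(html.slice(0, 200));
    });
});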

1. Tooling: WebStorm, an excellent IDE for Node development, highly recommended.

2. The approach: first crawl the news list page, then follow the links it contains to each target page, record the title, and tally the news source; once every article on the page has been counted, jump to the next page and repeat the process.

Note: as for why I did not take the approach of "crawl one article first, then follow its 'next article' link to get the next address": on this fire-safety site, the order of the "next article" links does not match the order of the news list. (manual smiley)

3. The code is as follows:

var http = require("http");
var fs = require("fs");
var cheerio = require("cheerio");
var charset = require("superagent-charset");
var agent = require("superagent");
charset(agent);

var obj = {};       // result object: news source -> article count

var page = 1;       // start page number
var maxPage = 38;   // end page number

var num = 0;        // number of records so far

var url = "http://www.cqfire.com/xxzx/news.asp?class1=%D0%C2%CE%C5%D6%D0%D0%C4&class2=%CA%D0%C4%DA%D0%C2%CE%C5";

startRequest(url + "&pageno=" + page, 0);

function startRequest(site, flag) {
    agent.get(site).charset("gbk").end((err, res) => {
        var html = res.text;
        var $ = cheerio.load(html); // parse the HTML with the cheerio module

        if (flag == 0) {
            // flag 0: this page must be parsed as a list page
            var eles = $("a").not(".nav_menu").not(".left_menu_class").not(".copy_menu");
            for (var i = 0; i < eles.length; i++) {
                // extract the URL of each <a> and crawl it with flag 1
                var target = "http://www.cqfire.com/" + eles.eq(i).attr("href");
                startRequest(target, 1);
            }

            if (page < maxPage) {
                // if the maximum page number has not been reached,
                // continue to the next list page with flag 0
                page++;
                console.log(url + "&pageno=" + page);
                startRequest(url + "&pageno=" + page, 0);
            }
        } else {
            // flag 1: a specific news page; extract the title and the source
            // get the news headline
            var title = $("span.STYLE2").text().trim();
            // get the news source
            var origin = $("span.STYLE2").parent().parent().parent().parent()
                .find("td[align='middle']").text().trim();
            var from = origin.split(" ")[0].split(":")[1];

            num++; // num is the running count of news articles
            console.log(num + "--" + title);

            // in the result object, the source is the key and the count is the value
            if (!obj[from]) {
                obj[from] = 0;
            }
            obj[from] += 1;

            var resStr = "";
            for (var key in obj) {
                resStr += key + "\t" + obj[key] + "\n";
            }
            // store the result string in a txt file; the synchronous write is used
            // because the asynchronous one produced many null entries, while the
            // synchronous file stays consistent with the statistics
            fs.writeFileSync("./data/result.txt", resStr, "utf-8");
        }
    });
}

4. Two plugins are used, superagent and superagent-charset. The .charset("gbk") call in the code above tells superagent which encoding to decode the response with; you can also call .charset() with no argument to have the encoding of the page detected automatically.
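For reference, a minimal sketch of that auto-detection usage (the URL is only an example, and per the article, calling .charset() with no argument detects the encoding from the response itself):

// Minimal sketch: letting superagent-charset detect the encoding itself.
var agent = require("superagent-charset")(require("superagent"));

agent.get("http://www.cqfire.com/")
    .charset() // no argument: detect the charset automatically
    .end(function (err, res) {
        if (err) return console.error(err);
        console.log(res.text.slice(0, 200)); // decoded text, no longer garbled
    });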

5. The cheerio package is used to parse the DOM structure of the target page; its API for selecting the target content is consistent with jQuery's.
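For instance, a small self-contained sketch of the jQuery-style selection used above (the HTML fragment is made up for illustration):

// Minimal sketch of the jQuery-style selection used in the crawler.
// The HTML fragment is invented to mimic the structure the crawler targets.
var cheerio = require("cheerio");

var html =
    '<span class="STYLE2">Some headline</span>' +
    '<table><tr><td align="middle">Source: City News</td></tr></table>';
var $ = cheerio.load(html);

console.log($("span.STYLE2").text().trim());        // "Some headline"
console.log($("td[align='middle']").text().trim()); // "Source: City News"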

Summary:

There are three key points when using Node for crawling:

1. How to get the "next page" address. The pages crawled here carry a pageno parameter in the URL, so you need to find that pattern before writing the crawler (see the sketch after this list).

2. How to extract the target content from the target page. This requires inspecting the structure of the document. The cheerio package used here selects content with a jQuery-consistent API and is also the mainstream choice for Node crawlers.

3. How to end the recursion. This crawler stops by checking a maximum page number; you could also set a maximum record count, or end the recursion manually, but take care to keep a record of the data already crawled. Both stop conditions are sketched below.
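As a minimal sketch of points 1 and 3 (buildPageUrl, crawlPage and the limits are illustrative names, not from the site or the original code): building "next page" URLs from a pageno parameter and stopping either at a page limit or a record limit.

// Illustrative sketch of pagination and recursion termination.
var baseUrl = "http://www.cqfire.com/xxzx/news.asp";
var maxPage = 38;     // stop condition 1: maximum page number
var maxRecords = 500; // stop condition 2: maximum number of records
var records = 0;

function buildPageUrl(pageNo) {
    return baseUrl + "?pageno=" + pageNo; // the pattern found by inspecting the links
}

function crawlPage(pageNo) {
    if (pageNo > maxPage || records >= maxRecords) {
        console.log("done, crawled " + records + " records");
        return; // end the recursion
    }
    console.log("fetching " + buildPageUrl(pageNo));
    // ... fetch and parse the page here, incrementing `records` ...
    crawlPage(pageNo + 1);
}

crawlPage(1);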
