Accessing cross-domain pages with front-end JavaScript usually means going through Ajax and its restrictions; on the back end, Node.js makes crawling web pages much easier.
Below is one of the simplest possible examples: it crawls my blog's home page and prints the titles of the posts listed there.
var http = require('http')
var cheerio = require('cheerio')

var url = 'http://www.cnblogs.com/feitan/'

function filterHtml(html) { // work on the DOM with cheerio
    var $ = cheerio.load(html)

    var data = []
    var titles = $('.postTitle a')
    titles.each(function () {
        data.push($(this).text())
    })

    return data
}

function outputInfo(data) { // pass the data on; here we simply print it
    data.forEach(function (title) {
        console.log('\n' + title)
    })
}

http.get(url, function (res) { // the http module's get method fetches the URL resource
    var html = ''
    res.on('data', function (data) {
        html += data
    })

    res.on('end', function () {
        var data = filterHtml(html)
        outputInfo(data)
    })
}).on('error', function () {
    console.log('Error getting data')
})
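Assuming Node.js is installed, the script can be saved to a file (say, spider.js, a name chosen here just for illustration), the dependency installed with npm install cheerio, and the crawler run with node spider.js.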
The http.get call at the bottom issues a GET request for the specified URL resource; its callback handles the response object, which delivers the HTML document in chunks through 'data' events and signals completion with 'end'.
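As a minimal sketch of that same request pattern, the variant below adds a status-code check and an explicit encoding; the log messages are my own additions, not part of the original example.

var http = require('http')

http.get('http://www.cnblogs.com/feitan/', function (res) {
    if (res.statusCode !== 200) { // treat anything but 200 OK as a failure
        console.log('Unexpected status code: ' + res.statusCode)
        res.resume() // drain the body so the socket is released
        return
    }

    res.setEncoding('utf8') // receive strings instead of raw Buffers
    var html = ''
    res.on('data', function (chunk) {
        html += chunk
    })
    res.on('end', function () {
        console.log('Received ' + html.length + ' characters')
    })
}).on('error', function (err) {
    console.log('Request failed: ' + err.message)
})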
The cheerio package handles the DOM processing; its API mimics the way jQuery wraps JavaScript DOM operations.
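For illustration, here is a small, self-contained sketch of that jQuery-like style; the HTML fragment and the class name are made up for this example:

var cheerio = require('cheerio')

// made-up markup, just to demonstrate the selector API
var $ = cheerio.load('<ul><li class="item">one</li><li class="item">two</li></ul>')

$('.item').each(function () {
    console.log($(this).text()) // prints "one", then "two"
})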
Results:
node.js web crawler