Business requirements have created a need for some crawler design work.
At present this part is handled as an outsourced project, and the leader has asked us to look at the actual situation and evaluate whether we could develop it in-house.
However, most of these OTA sites rely heavily on asynchronous loading, and their interfaces return encrypted data.
In other words, the response data we intercept from the console is encrypted; only after the page's client-side code parses it is the page we want actually rendered.
Therefore, in order to crawl this data correctly, we have to use a browser kernel.
The browser kernel tools that are commonly used now are:
QtWebKit / Spynner
Selenium (but because it has to drive a local browser, it is not suitable for large-scale crawls)
PhantomJS (WebKit) http://phantomjs.org
SlimerJS (Gecko) https://slimerjs.org
Apart from Selenium, all of these were tested: QtWebKit and SlimerJS were able to render the pages successfully.
PhantomJS, however, got stuck and never finished the asynchronous loading.
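Before giving up on PhantomJS entirely, it is worth seeing which requests never complete. The following is a minimal diagnostic sketch (not part of the original scripts; the URL is a placeholder) that uses PhantomJS's standard resource callbacks to log resources that start but never finish:

var page = require('webpage').create();
var pending = {}; // request id -> URL of requests that have started but not yet finished

page.onResourceRequested = function (requestData) {
    pending[requestData.id] = requestData.url;
};
page.onResourceReceived = function (response) {
    if (response.stage === 'end') {
        delete pending[response.id];
    }
};
page.onResourceError = function (resourceError) {
    console.log('FAILED: ' + resourceError.url + ' (' + resourceError.errorString + ')');
    delete pending[resourceError.id];
};

page.open('http://example.com/', function (status) {
    console.log('status: ' + status);
    // after a grace period, print whatever never completed
    setTimeout(function () {
        console.log('still pending after 10s:');
        Object.keys(pending).forEach(function (id) {
            console.log('  ' + pending[id]);
        });
        phantom.exit();
    }, 10000);
});

If the pending list keeps showing the same Ajax endpoints, the problem is with those requests themselves (for example SSL or user-agent issues) rather than with the waiting strategy.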
The SlimerJS code can be exactly the same as the PhantomJS code, because the two expose fully compatible interfaces.
function render(page) {
    page.render('qunar.png');
    phantom.exit();
}

var page = require('webpage').create();
var system = require('system');

page.onResourceRequested = function (request) {
    // unnecessary requests can be filtered out here
    // system.stderr.writeLine('= onResourceRequested()');
    // system.stderr.writeLine('  request: ' + JSON.stringify(request, undefined, 4));
};

page.open('*ar.com/city/guangzhou/#fromDate=2015-10-01&toDate=2015-10-02&from=qunarindex', function (status) {
    var title = page.evaluate(function () {
        return document.title;
    });
    console.log('page title is ' + title);
    console.log('status: ' + status); // the page load status
    if (status === 'success') {
        // setTimeout(render, 1000, page);
        console.log('I am ready!');
    }
});

var t = 3;
var interval = setInterval(function () {
    if (t > 0) {
        console.log(t--);
    } else {
        var htmlContent = page.evaluate(function () {
            return document.documentElement.outerHTML;
        });
        console.log(htmlContent);
        page.render('qunar.png');
        phantom.exit();
    }
}, 1000);
The above is the naive, blind-wait version of the PhantomJS code: it simply waits a fixed amount of time for the page's Ajax resources to finish loading.
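A fixed wait is fragile: too short and the data is missing, too long and every crawl is slow. A more robust pattern is to poll inside the page until the element we need actually exists. The helper below is only a sketch (the name waitForElement and the '.hotel_price' selector are borrowed from the CasperJS version further down, not from the original script):

function waitForElement(page, selector, onReady, timeoutMs) {
    var start = Date.now();
    var timer = setInterval(function () {
        // ask the page whether the target element has appeared yet
        var found = page.evaluate(function (sel) {
            return document.querySelector(sel) !== null;
        }, selector);
        if (found) {
            clearInterval(timer);
            onReady(true);
        } else if (Date.now() - start > timeoutMs) {
            clearInterval(timer);
            onReady(false);
        }
    }, 250);
}

// usage inside page.open's callback:
// waitForElement(page, '.hotel_price', function (ok) {
//     if (ok) { console.log(page.content); page.render('qunar.png'); }
//     phantom.exit();
// }, 15000);

Because the SlimerJS interface is the same, this helper works unchanged under either engine.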
Next comes the CasperJS-based parsing scheme; its code is shown after the list of remaining issues below.
Finally, there are still quite a few problems to be solved:
Data persistence for the PhantomJS engine.
Why the WebKit kernel cannot render certain pages at all: even when testing against Ctrip, the request fails outright.
var casper = require('casper').create({
    verbose: true,
    logLevel: 'debug', // add the debug parameter
    userAgent: 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_4) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/45.0.2454.93 Safari/537.36'
});

// monitor requested resources
casper.on('resource.requested', function (requestData, request) {
    // filter out some unnecessary loads with a regular expression
    if (requestData.url.match(/google|gstatic|doubleclick/)) {
        request.abort();
        return;
    } else {
        this.echo(requestData.url);
    }
});

casper.start('*nar.com/city/taipei/', function () {
    this.echo('Start - before');
    // this.scrollToBottom(); // scroll to the bottom of the page
    this.waitForSelector('.hotel_price',
        function () {
            // called once an element matching '.hotel_price' appears: capture the whole page
            this.captureSelector('qunar.png', 'html');
        },
        function () {
            // called on failure: save a screenshot, print a message and exit
            this.capture('qunar.png');
            this.die('Timeout reached. Fail whale?');
        },
        5000); // timeout: if the selector has not appeared within 5000 ms, treat it as a failure
});

casper.run();
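On the persistence problem listed above: a minimal stopgap, sketched here under the assumption that the fs module bundled with PhantomJS/SlimerJS is available (the file name is arbitrary), is to write the captured HTML to disk from inside the script instead of only logging it:

var fs = require('fs');

function saveHtml(page, path) {
    // serialize the current DOM and append it to a local file
    var html = page.evaluate(function () {
        return document.documentElement.outerHTML;
    });
    fs.write(path, html + '\n', 'a'); // 'a' appends, 'w' would overwrite
}

// e.g. call saveHtml(page, 'qunar_guangzhou.html') right before phantom.exit()

The same idea applies in the CasperJS script via this.getHTML() written out with the fs module; and since SlimerJS is the engine that renders these pages correctly, the CasperJS script above can, as far as I know, be driven by it with casperjs --engine=slimerjs script.js.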
For now, SlimerJS can indeed crawl the desired data content.