Ajax stands for Asynchronous JavaScript and XML. Because its request/response mechanism is driven by JavaScript, traditional crawlers, which lack any semantic understanding of JavaScript, are essentially unable to trigger the asynchronous calls or to parse the logic and content returned in the asynchronous callbacks.
Moreover, in Ajax applications JavaScript modifies the DOM structure extensively; in some cases all of the page content is fetched from the server and drawn dynamically by JavaScript. A crawler accustomed to static pages, where the DOM structure never changes after delivery, simply cannot make sense of this.
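The gap can be seen with a minimal sketch: fetching the raw HTML shows only an empty container, while a JavaScript-capable client sees the rendered content. The URL below is a hypothetical Ajax-driven page, and Playwright is used here only as one example of a headless browser.

```python
import requests
from playwright.sync_api import sync_playwright

URL = "https://example.com/app"  # hypothetical Ajax-driven page

# 1. What a protocol-driven crawler sees: the HTML as delivered,
#    before any JavaScript has run.
raw_html = requests.get(URL, timeout=10).text
print("static fetch:", raw_html[:200])  # typically just <div id="app"></div>

# 2. What the user's browser sees: the DOM after scripts have executed
#    and the asynchronous callbacks have filled in the content.
with sync_playwright() as p:
    browser = p.chromium.launch()
    page = browser.new_page()
    page.goto(URL, wait_until="networkidle")  # wait for Ajax calls to settle
    print("rendered DOM:", page.content()[:200])
    browser.close()
```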
From this we can see that earlier crawlers were protocol-driven, whereas for Ajax the "crawler" engine must be event-driven. To build an event-driven crawler, the following problems must be solved first (a sketch follows the list):
● Interactive analysis and interpretation of JavaScript
● Dispatch and interpretation of DOM events
● Semantic extraction of dynamic DOM content
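The sketch below shows the event-driven loop these three problems imply: execute the page's JavaScript, dispatch DOM events (here, clicks), and extract the content each event draws into the DOM. The URL and the selectors are illustrative assumptions, and a production crawler would need to re-discover elements after each DOM mutation.

```python
from playwright.sync_api import sync_playwright

URL = "https://example.com/app"  # hypothetical Ajax-driven page

with sync_playwright() as p:
    browser = p.chromium.launch()
    page = browser.new_page()
    page.goto(URL, wait_until="networkidle")  # problem 1: JS is interpreted

    # Problem 2: dispatch DOM events rather than following <a href> links.
    # (Assumed selectors; a real crawler would also re-enumerate elements
    # after each click, since events may rewrite the DOM.)
    for link in page.locator("[onclick], a[href^='javascript:']").all():
        link.click()
        page.wait_for_load_state("networkidle")  # let callbacks finish

        # Problem 3: extract the semantics of the dynamically drawn DOM.
        print(page.locator("body").inner_text()[:200])

    browser.close()
```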
As for how to implement this, I personally find the paper "Crawling Ajax-Driven Web 2.0 Applications" to be a valuable reference; study it if you are interested.