Google's official blog has published a proposal for making Ajax-based sites crawlable. Google believes that a large share of web content (the post cites 69%) is Ajax-based, which seriously hinders search indexing.
Although search engines can already obtain some Ajax content by analyzing JavaScript, doing so is time-consuming and inefficient, so Google has proposed a new scheme: the web server uses headless-browser technology to return to the crawler the HTML that the Ajax code would produce when executed in a real browser. A special `#!` fragment marker identifies Ajax content: a stateful URL such as http://example.com/page?query#state becomes http://example.com/page?query#!state (Google calls this the "pretty URL"), and this is what appears in search results. When the crawler encounters the special fragment (`#!state`), it maps the URL to http://example.com/page?query&_escaped_fragment_=state (the "ugly URL" in Google's terms) and requests that from the server to obtain the Ajax content. The process is as follows:
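The pretty-to-ugly URL mapping described above can be sketched in a few lines. This is a minimal illustration of the rewriting a crawler would perform, not Google's reference implementation; the exact percent-encoding rules are defined in Google's specification.

```python
from urllib.parse import quote, urlsplit, urlunsplit

def to_ugly_url(pretty_url):
    """Map a '#!'-style pretty URL to the _escaped_fragment_ form."""
    parts = urlsplit(pretty_url)
    if not parts.fragment.startswith("!"):
        return pretty_url  # no Ajax marker, nothing to rewrite
    state = parts.fragment[1:]       # drop the leading '!'
    escaped = quote(state, safe="")  # percent-encode the fragment value
    sep = "&" if parts.query else ""
    query = parts.query + sep + "_escaped_fragment_=" + escaped
    return urlunsplit((parts.scheme, parts.netloc, parts.path, query, ""))

print(to_ugly_url("http://example.com/page?query#!state"))
# http://example.com/page?query&_escaped_fragment_=state
```

The server then answers the ugly URL with a pre-rendered HTML snapshot, while ordinary browsers keep using the pretty URL and execute the Ajax code themselves.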
However, because this scheme requires the web server itself to execute the Ajax scripts, it will undoubtedly add load to Ajax sites, and the proposal is still under discussion on Google's Webmaster Help Forum. For more details, see the SMX East Ajax proposal.
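The server-side half of the scheme, returning an HTML snapshot when the ugly URL is requested, might be sketched as follows using only the Python standard library. Here `render_snapshot` is a hypothetical stand-in: a real site would invoke a headless browser at this point, which is exactly the extra load the paragraph above describes.

```python
from http.server import BaseHTTPRequestHandler
from urllib.parse import urlsplit, parse_qs

def render_snapshot(state):
    # Hypothetical: in production this would run a headless browser
    # to execute the Ajax code and capture the resulting HTML.
    return "<html><body>Snapshot for state: %s</body></html>" % state

class CrawlerAwareHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        query = parse_qs(urlsplit(self.path).query)
        if "_escaped_fragment_" in query:
            # Ugly URL requested by a crawler: serve the HTML snapshot.
            body = render_snapshot(query["_escaped_fragment_"][0])
        else:
            # Normal browser request: serve the Ajax application shell.
            body = "<html><body><script>/* Ajax app */</script></body></html>"
        data = body.encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Type", "text/html; charset=utf-8")
        self.send_header("Content-Length", str(len(data)))
        self.end_headers()
        self.wfile.write(data)
```

Every crawler hit on an ugly URL triggers a fresh render, so sites adopting the scheme typically need to cache snapshots to keep the extra cost manageable.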