Performance optimization methods worth trying for Ajax paging without page refreshes

Paging without a full page refresh via Ajax is by now a familiar pattern: a JavaScript method on the front-end page requests a paging-data interface on the server through Ajax, and once the data arrives, it builds an HTML structure on the page and presents it to the user, similar to the following:

<script type="text/javascript">
function getPage(pageIndex) {
    ajax({
        url: "remoteInterface.cgi",
        method: "get",
        data: { pageIndex: pageIndex },
        callback: callback
    });
}
function callback(dataList) {
    // TODO: build an HTML structure from the returned dataList and present it to the user
}
</script>

Code snippet 1

Here, remoteInterface.cgi is a server-side interface. Space is limited, so the sample code involved may not be complete; it is only meant to convey the idea clearly.

On the UI there may be paging controls of various styles, all of them familiar. Clicking such a control triggers the getPage(pageIndex) method. However, the getPage method may not be that simple.

Following code snippet 1, every click to turn a page requests remoteInterface.cgi once. But apart from the first time each page is requested, the remote interface calls triggered by getPage(1), getPage(2), getPage(3) and so on, and the round-trip network traffic they generate, are repeated and unnecessary. When each page is first requested, its data could be cached on the page in some form. If the user later looks back at an earlier page, the getPage method should first check whether the local cache already holds that page's data; if it does, the method re-displays the cached data instead of calling the remote interface again. Following this idea, we can modify code snippet 1 as follows:

<script type="text/javascript">
var pageDataList = {};
function getPage(pageIndex) {
    if (pageDataList[pageIndex]) { // the local data list already holds this page's data
        showPage(pageDataList[pageIndex]); // display the cached data directly
    } else {
        ajax({
            url: "remoteInterface.cgi",
            method: "get",
            data: { pageIndex: pageIndex },
            callback: function (dataList) {
                callback(pageIndex, dataList); // pass the page number through to the callback
            }
        });
    }
}
function callback(pageIndex, dataList) {
    pageDataList[pageIndex] = dataList; // cache the data
    showPage(dataList);                 // display the data
}
function showPage(dataList) {
    // TODO: build an HTML structure from the returned dataList and present it to the user
}
</script>

Code snippet 2

This reduces round-trip time for network requests and, more importantly, saves valuable network traffic and lightens the load on the interface server. In a low-bandwidth environment, or when the interface server is under heavy pressure, this simple improvement can show clearly visible optimization results. The first of Yahoo's well-known 34 rules for website performance is to minimize the number of HTTP requests, and Ajax asynchronous requests certainly fall within that scope. A web application with little traffic may not feel the need, but imagine a page with 10 million visits per day, where users view 5 pages on average and one of those is a repeat view. For such a page, code snippet 1 triggers an average of 50 million data requests per day, while code snippet 2 cuts at least 10 million of them. If each request transfers 20 KB of data, that saves 10 million × 20 KB = 200,000,000 KB, roughly 190 GB of network traffic per day. The resources saved this way are considerable.
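The arithmetic above can be checked directly. A quick sketch, using the illustrative figures from the text:

```javascript
// Daily traffic assumptions taken from the article (illustrative figures).
const dailyVisits = 10_000_000;   // visits per day
const pagesPerVisit = 5;          // pages each user views on average
const repeatsPerVisit = 1;        // one of those page views is a repeat

// Code snippet 1 hits the server on every page turn.
const requestsWithoutCache = dailyVisits * pagesPerVisit; // 50 million/day

// Code snippet 2 serves repeat views from the in-page cache.
const requestsSaved = dailyVisits * repeatsPerVisit;      // 10 million/day

// At 20 KB per response, the traffic saved per day:
const kbPerRequest = 20;
const kbSaved = requestsSaved * kbPerRequest;             // 200,000,000 KB
const gbSaved = kbSaved / 1024 / 1024;                    // ≈ 190 GB

console.log(requestsWithoutCache, requestsSaved, gbSaved.toFixed(1));
```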

Going a step further, the data caching approach in code snippet 2 is worth discussing. We assumed above that the timeliness of the paging data could be ignored, but in real applications timeliness is usually unavoidable, and caching undoubtedly reduces it. A real caching scheme must therefore rest on an analysis of, and trade-off against, the application's timeliness requirements.

For content that is not particularly time-sensitive, caching on the page should still be acceptable. For one thing, a user will not stay on the same page forever; any navigation between pages causes a reload, at which point updated data is fetched. For another, users who habitually refresh the page can do so whenever they want to check whether the list has new data. For something closer to perfection, you can set a time window, say 5 minutes: while the user stays on the current page, page turns within five minutes read the page cache first, and page turns after five minutes issue a fresh request.
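The five-minute window described above can be sketched by storing a timestamp alongside each cached page. This is a minimal sketch: the TTL value is an assumption, and ajax() and showPage() are the hypothetical helpers from the snippets above.

```javascript
// Sketch: per-page cache entries expire after a freshness window.
var CACHE_TTL_MS = 5 * 60 * 1000; // assumed 5-minute freshness window
var pageCache = {};               // pageIndex -> { dataList, fetchedAt }

function getPage(pageIndex) {
    var entry = pageCache[pageIndex];
    if (entry && Date.now() - entry.fetchedAt < CACHE_TTL_MS) {
        showPage(entry.dataList); // still fresh: reuse the cached data
        return;
    }
    ajax({
        url: "remoteInterface.cgi",
        method: "get",
        data: { pageIndex: pageIndex },
        callback: function (dataList) {
            // store the data together with the time it was fetched
            pageCache[pageIndex] = { dataList: dataList, fetchedAt: Date.now() };
            showPage(dataList);
        }
    });
}
```

A page turn after the window elapses falls through the freshness check and requests the server again, which then refreshes both the cache entry and its timestamp.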

In some cases, when we can predict the data's update frequency, such as knowing it may change only once every few days, we can even consider using local storage and triggering a request to the server only at certain times, saving requests and traffic even more thoroughly. Which caching method to apply ultimately depends on the product's timeliness requirements, but the principle is to save requests and traffic wherever possible, especially on pages with heavy traffic.

Does caching then not apply to data with high timeliness requirements? Of course it does, but the overall approach has to change. In general, the so-called changes mainly involve items being added to, removed from, or modified in the list, while the vast majority of the data remains unchanged; so in most cases, caching within a time window as configured above still applies.

For real-time data updates, the obvious idea is a timer: for example, execute getPage(pageIndex) every 20 seconds and redraw the list. But recall the earlier assumption of 10 million visits per day and you will see that this is a terrible idea: at that traffic volume and polling frequency, the pressure on the server would be enormous. For how to handle this situation, we can look at how Gmail, NetEase 163 Mail and Sina Mail handle their mail list pages. They almost exactly match our earlier assumptions: extremely high daily traffic and a real-time update requirement. Analysis with a network packet-capture tool shows that when a user repeatedly requests the same page of the list, no request is sent to the server. To ensure the user is promptly notified and the mail list updated when new mail arrives, a scheduled, repeating asynchronous request can be used, but this request only performs a status query rather than refreshing the list. A request for the updated data is issued only when the status query reports a change, or the status-query interface itself returns the updated data when it finds one. In fact, 163 Mail's status-query interval is fairly long, about two minutes, and Sina Mail's is longer still, about five minutes; clearly both are trying to reduce the number of requests. This kind of handling, however, cannot be done by the front-end alone; the implementation must be designed together with the back-end interface.
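The mail-client pattern described above, polling a lightweight status interface on a long interval and fetching the full list only when something changed, can be sketched as follows. The checkStatus.cgi endpoint, the version counter, and the two-minute interval are all assumptions; ajax() and getPage() are the hypothetical helpers from the snippets above.

```javascript
// Sketch: the timer queries only a cheap status endpoint. The heavier
// list request is issued only when the status indicates new data.
var POLL_INTERVAL_MS = 2 * 60 * 1000; // assumed interval, similar to 163 Mail's ~2 minutes
var lastKnownVersion = 0;             // assumed: server exposes a version counter
var currentPageIndex = 1;             // the page the user is currently viewing

function pollStatus() {
    ajax({
        url: "checkStatus.cgi",       // hypothetical lightweight status interface
        method: "get",
        data: { since: lastKnownVersion },
        callback: function (status) {
            if (status.version > lastKnownVersion) {
                lastKnownVersion = status.version;
                getPage(currentPageIndex); // refresh only when data actually changed
            }
            // Otherwise: no list request at all for this poll.
        }
    });
}

var pollTimer = setInterval(pollStatus, POLL_INTERVAL_MS);
```

The status response here is a tiny payload, so even at high traffic the polling cost stays far below re-fetching the list every cycle.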

Now let us look back at the data caching in code snippet 2, setting aside the savings in requests and traffic and considering only the front-end implementation. As code snippet 2 stands, the raw data is stored, so on a repeat visit showPage(dataList) must rebuild the HTML structure from the data and present it to the user, even though we already created that very structure before. Could we instead store the structure itself when it is first created? That would avoid repeated JavaScript computation, especially when the structure is complex. Going further: the structure was created on the page, and destroying it on a page turn only to create a new one is itself resource-consuming. Could we create each page's structure once, hide it via CSS styles on page turns instead of destroying it, and on repeat visits simply toggle which of the already-created structures is shown?
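Caching the rendered structure and toggling visibility instead of destroying and rebuilding can be sketched like this. The "list" container id and this variant of showPage (which takes the page index) are assumptions for illustration.

```javascript
// Sketch: keep each page's rendered container in the DOM and switch
// between them via CSS display, instead of rebuilding HTML every time.
var pageNodes = {};          // pageIndex -> the container element already built
var visiblePageIndex = null; // which page's container is currently shown

function showPage(pageIndex, dataList) {
    if (visiblePageIndex !== null) {
        pageNodes[visiblePageIndex].style.display = "none"; // hide, don't destroy
    }
    var node = pageNodes[pageIndex];
    if (!node) {
        node = buildPageNode(dataList);                     // build the structure only once
        pageNodes[pageIndex] = node;
        document.getElementById("list").appendChild(node);  // assumed container id
    }
    node.style.display = "";
    visiblePageIndex = pageIndex;
}

function buildPageNode(dataList) {
    var node = document.createElement("ul");
    for (var i = 0; i < dataList.length; i++) {
        var item = document.createElement("li");
        item.textContent = dataList[i];
        node.appendChild(item);
    }
    return node;
}
```

On a repeat visit, showPage skips buildPageNode entirely and only flips two display styles, so both the JavaScript computation and the DOM churn of destroy-and-recreate are avoided.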

Finally, the methods discussed here will not necessarily suit every scenario, but they may offer some inspiration; try one or two of them where appropriate. And with a little divergent thinking, these ideas need not apply only to paging without refresh. You are welcome to discuss them here.


Source: TID-caifu Tong Design Center
