What is the end user response time?
As described in Part 1 of this series, the end user response time is the time from when the user triggers a page request to when the page is fully displayed. It is also called the browser response time. The end user response time is the end user's intuitive experience of an application's performance. It consists of three parts:
- Page request and download time (referred to as the page download time).
- Server response time.
- Browser processing and rendering time.
Expressed as a formula:

End user response time = page download time + server response time + browser processing and rendering time
Page request and download time is the second blind spot that is often ignored; that is, the impact of network transmission on response time under complex network conditions is missing. Real users are likely to access an Internet application from many different network environments (for example, Internet cafes). For practical reasons, performance measurement software usually accesses the Internet application in the lab, while a real production environment contains a wide variety of network conditions. Performance engineering needs to simulate the actual production environment as closely as possible, but at an acceptable cost, and measuring every possible network environment is not feasible. However, if we can model the relationship between the application's network transmission behavior and the network parameters that determine page download time, we may be able to predict the response time in real network environments from laboratory performance data.
Model of page download time
A simplified browser response time calculation model:

Browser response time = server response time + page download time + browser processing and rendering time
Page download time = page size/network bandwidth + (network latency × HTTP request count)/concurrency
The page download time model involves three aspects:
- The Internet application's network transmission behavior: the number of HTTP requests the application sends through the browser, the page size (the total size of the HTTP responses), how many HTTP connections exist at the same time, how the application uses these connections concurrently, and so on.
- Network parameters: these can generally be described by QoS (Quality of Service) metrics, including bandwidth, latency, packet loss rate, and so on. To simplify the model, however, bandwidth and network latency are treated as the main factors.
- Page download time: the portion of the end user response time spent on network transmission, from when the user triggers a page request until the page is fully displayed.
Relationship between network bandwidth and page size
Among these relationships, the one between page size and bandwidth is simple and clear. Network bandwidth determines how much data can be transmitted per second over the network. For a file of the same size, the transmission time in a low-bandwidth network will be greater than in a high-bandwidth one. Therefore:

Time consumed on bandwidth = page size/network bandwidth
For example, downloading a 1 MByte page over a 512 kbps ADSL line consumes 1 MByte/512 kbps = (1024 × 1024 × 8 bits)/(512 × 1024 bps) = 16 seconds.
Note: The amount of data actually transferred is slightly larger than the total size of the page files, because each layer of the protocol stack (HTTP, TCP, IP) adds header data.
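The bandwidth arithmetic above can be checked with a short Python helper (a minimal sketch; the 1 MByte page and 512 kbps line are the figures from the example):

```python
def bandwidth_time_seconds(page_size_bytes, bandwidth_bps):
    """Time consumed on bandwidth: page size divided by network bandwidth."""
    return page_size_bytes * 8 / bandwidth_bps

# The example from the text: a 1 MByte page over a 512 kbps ADSL line.
page_size = 1 * 1024 * 1024   # 1 MByte in bytes
bandwidth = 512 * 1024        # 512 kbps in bits per second
print(bandwidth_time_seconds(page_size, bandwidth))  # 16.0
```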
Relationship between the number of HTTP requests and network latency
The relationship between the number of HTTP requests and network latency is not so obvious. Let's start with an example: a company in Shanghai wants to ship 100 standard containers from Beijing to Shanghai by truck. How long does it take to ship them all?
- First, transporting the 100 containers requires 100 trips between Shanghai and Beijing.
- Second, each trip between Shanghai and Beijing takes quite some time. This is mainly determined by the distance, the speed of the truck, and the driver's rest time.
- Finally, if there is only one truck, it must make 100 trips. If one round trip takes two days, shipping all 100 containers takes 200 days. With two trucks running in parallel, it takes only 100 days, and with four trucks, 50 days.
The relationship between the number of HTTP requests and network latency mirrors the example:
- First, if the Internet application sends 100 HTTP requests, this is equivalent to the 100 trips in the example.
- Second, the network has latency. This corresponds to the round-trip time between Beijing and Shanghai, and it is likewise related to speed and distance. The speed can be assumed to be the speed of light, which sounds fast, but a round trip between China and the United States still takes about 0.13 seconds. The overhead of gateways along the network path is also considerable, much like the driver needing to rest and the truck needing to refuel in the example. Network latency can be measured with a simple operating system command, ping; the average ping value can be taken as a rough measure of network latency.
- Finally, is there an equivalent of "how many trucks"? Yes: the maximum number of HTTP connections supported by the browser. Current browsers generally support at least 2 concurrent HTTP connections. Does that mean that when an end user in China accesses an Internet application in the United States that sends a total of 100 HTTP requests over the browser's 2 HTTP connections, the network latency costs only 0.13 × (100/2) = 6.5 seconds? It is not that simple. This involves the relationship between concurrency and the number of HTTP connections.
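The naive estimate in the question above works out as follows (a sketch of the arithmetic only; as the text notes, real concurrency behavior makes the true figure larger):

```python
def naive_latency_time(rtt_seconds, http_requests, connections):
    """Lower-bound latency cost, assuming all connections are fully used."""
    return rtt_seconds * http_requests / connections

# 100 requests over a China-US link (~0.13 s round trip), 2 connections.
print(naive_latency_time(0.13, 100, 2))  # ≈ 6.5 seconds
```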
Internet application concurrency and HTTP connections
Internet applications differ from the example above in one way: in the example, it is known from the start that there are 100 containers to transport, so the two trucks can run fully in parallel. In an Internet application, how many "containers" need transporting only becomes clear gradually during the transport itself. A more accurate version of the example is:
- The company in Shanghai learns that there is one container in Beijing to be transported, and dispatches one truck. (Only one of the two trucks is used; no concurrency.)
- When the truck comes back, it also brings back the news that there is another container in Beijing, so another truck is dispatched. (Only one of the two trucks is used; no concurrency.)
- When that truck comes back, the news is that there are two containers in Beijing, so both trucks are dispatched. (Both trucks are used; there is concurrency.)
- When the two trucks come back, the news is that there is one more container, so one truck is dispatched. (Only one of the two trucks is used; no concurrency.)
- And so on, until a returning truck reports that there are no more containers. Suppose there are 100 trips in total: the two trucks run together 30 times, and a single truck runs 40 times. The total time is 30 × 2 + 40 × 2 = 140 days. With no concurrency at all, 100 × 2 = 200 days would be required. We can therefore define the concurrency as 200/140 = 1.43.
- Now suppose everything else is the same, but the origin moves from Beijing to Harbin, so a round trip takes four days. The time needed to ship all containers can then be estimated directly from the concurrency: (total number of trips × time for a single round trip)/concurrency = 100 × 4/1.43 = 280 days.
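The container arithmetic above can be verified with a small sketch (the 30/40 split and the 4-day Harbin round trip are the figures from the example):

```python
# Two trucks: 30 rounds with both trucks out, 40 rounds with only one.
concurrent_rounds, serial_rounds, round_trip_days = 30, 40, 2

total_trips = concurrent_rounds * 2 + serial_rounds * 1         # 100 trips
actual_days = (concurrent_rounds + serial_rounds) * round_trip_days  # 140 days
serial_days = total_trips * round_trip_days                     # 200 days with no concurrency
concurrency = serial_days / actual_days                         # ≈ 1.43

# Same workload, but a round trip now takes 4 days (Harbin instead of Beijing):
harbin_days = total_trips * 4 / concurrency                     # ≈ 280 days
print(total_trips, actual_days, round(concurrency, 2), round(harbin_days))
```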
The relationship between Internet application concurrency and HTTP connections is similar:
- When an HTTP response is returned, the browser analyzes it and finds several images, CSS, and JavaScript files to download. The browser can download these files concurrently.
- However, if after downloading the JavaScript file a.js the browser finds that a.js references b.js, and after downloading b.js it finds that b.js references c.js, the browser cannot download these files concurrently; it must download them serially. In Web 2.0 applications, this situation is quite common.
From the above analysis, the relationship between network latency, HTTP request count, concurrency, and the total time consumed on latency is:

Time consumed on network latency = (network latency × HTTP request count)/concurrency

The simplified model of page download time is therefore:

Page download time = time consumed on bandwidth + time consumed on network latency = page size/network bandwidth + (network latency × HTTP request count)/concurrency
This simplified model leaves out some factors:
- DNS lookup time;
- HTTP request construction time;
- the cost of maintaining HTTP connection state;
- time consumed by the browser and server during transmission, and so on;
- the impact of bandwidth on concurrency.
Based on this model, we only need to measure the page size, the number of HTTP requests, and the concurrency to infer the page download time under different network conditions.
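Putting the simplified model together, the following sketch evaluates it for a few illustrative network profiles (the page figures and the profile numbers are assumptions for illustration, not measurements):

```python
def page_download_time(page_size_bytes, bandwidth_bps, rtt_seconds,
                       http_requests, concurrency):
    """Simplified model: bandwidth cost plus latency cost."""
    bandwidth_time = page_size_bytes * 8 / bandwidth_bps
    latency_time = rtt_seconds * http_requests / concurrency
    return bandwidth_time + latency_time

# Hypothetical measured behavior of one page: 500 KB, 40 requests, concurrency 1.5.
page = dict(page_size_bytes=500 * 1024, http_requests=40, concurrency=1.5)

# Illustrative network profiles (bandwidth in bps, round-trip latency in seconds).
profiles = {
    "LAN":              dict(bandwidth_bps=100_000_000, rtt_seconds=0.001),
    "ADSL":             dict(bandwidth_bps=512 * 1024, rtt_seconds=0.05),
    "intercontinental": dict(bandwidth_bps=2_000_000, rtt_seconds=0.13),
}
for name, net in profiles.items():
    print(f"{name}: {page_download_time(**page, **net):.2f} s")
```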
Measuring the network transmission behavior and end user response time of Internet applications
As mentioned above:

End user response time = page download time + server response time + browser processing and rendering time
An Internet application's network transmission behavior mainly includes the following aspects:
- Page size
- HTTP request count
- Concurrency
As you can imagine, if we can monitor the network transmission, we can learn the page size and the number of HTTP requests, and with some analysis we can also derive the concurrency. In fact, the server response time and the browser processing and rendering time can also be derived from the application's network transmission behavior. And:

End user response time = page download time + server response time + browser processing and rendering time = page size/network bandwidth + (network latency × HTTP request count)/concurrency + server response time + browser processing and rendering time

Therefore, once we have the page size, the number of HTTP requests, the concurrency, the server response time, and the browser processing and rendering time, we can estimate the end user response time of the application in any network environment.
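As a sketch, the full formula can be wrapped as a predictor. The server response time and rendering time below are hypothetical lab measurements, and the network figures are illustrative assumptions:

```python
def end_user_response_time(page_size_bytes, http_requests, concurrency,
                           server_seconds, render_seconds,
                           bandwidth_bps, rtt_seconds):
    """End user response time = page download time + server time + render time."""
    download = (page_size_bytes * 8 / bandwidth_bps
                + rtt_seconds * http_requests / concurrency)
    return download + server_seconds + render_seconds

# Hypothetical lab measurements for one page, predicted over a 1 Mbps/80 ms link.
t = end_user_response_time(
    page_size_bytes=300 * 1024, http_requests=20, concurrency=1.4,
    server_seconds=0.4, render_seconds=0.3,
    bandwidth_bps=1_000_000, rtt_seconds=0.08)
print(f"{t:.2f} s")
```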
There are many network monitoring tools, such as Wireshark. Wireshark works at a lower layer and can capture information across the entire protocol stack, but its user interface is not well suited to the explanations in this article. This article therefore uses IBM Page Detailer Pro, taking the home page of Lotus Mashups as an example, to describe how to measure and interpret the network transmission behavior of an Internet application.
How to reduce the page download time
The preceding analysis shows that you can reduce the page download time in the following ways:
- Reduce the number of HTTP requests to cut the time consumed on network latency. Methods:
  - Make HTTP responses cacheable wherever possible. This at least reduces the download time of subsequent visits to the page.
  - Bundle multiple files together. For example, Dojo ShrinkSafe can combine multiple JavaScript files into one, and CSS sprites can combine multiple images into one.
  - Of course, reducing the number of HTTP requests through design or code restructuring is always effective.
- Reduce the page size to cut the time consumed on bandwidth. Methods:
  - Make HTTP responses cacheable wherever possible. This at least reduces the download time of subsequent visits to the page.
  - Enable HTTP compression. Mainstream browsers support gzip encoding; enabling HTTP compression on the server side can reduce the size of text content by more than 60%.
  - Of course, reducing the page size through design or code restructuring is always effective.
- Increase the concurrency. Methods:
  - Prefetch. For example, a.js references b.js and b.js references c.js; normally these downloads are serial. If, however, b.js and c.js are referenced at the same time as a.js, the downloads can proceed concurrently, reducing the time consumed on network latency.
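The HTTP compression point above is easy to demonstrate with Python's standard gzip module. The HTML-like payload below is made up for illustration, and real savings depend on the content; repetitive markup compresses especially well:

```python
import gzip

# A made-up, repetitive HTML-like payload; markup like this compresses very well.
html = b"<div class='item'><span>placeholder text</span></div>\n" * 200

compressed = gzip.compress(html)
saving = 1 - len(compressed) / len(html)
print(f"original: {len(html)} bytes, gzipped: {len(compressed)} bytes, "
      f"saved {saving:.0%}")
```

On realistic HTML, CSS, and JavaScript the ratio is lower than for this artificial sample, but savings above 60% for text content are common.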