High-Performance Web Site Architecture Tips (1): Understanding the Metrics Used to Measure Website Performance

Source: Internet
Author: User
Tags: response code, website performance

How does the server send data?
    1. The server program writes the data to be sent into its own (user-space) memory;
    2. The server program issues a system call to the kernel through the operating system's interface (a minimal sketch of steps 1-2 appears after the note below);
    3. The kernel copies the data from user-space memory into a kernel buffer and notifies the NIC to fetch it; the CPU then moves on to other work;
    4. The NIC copies the data from the kernel buffer designated by the CPU into its own NIC buffer;
    5. The NIC converts the bytes into bits and sends them onto the network as electrical signals.

Note: copies of data inside the computer happen at the width of the bus. On a 32-bit system, for example, 32 bits are copied at a time.
The bus is like a 32- or 64-lane road: data is stored in the computer as 0s and 1s, and each lane can carry only one 0/1 per copy, so only 32 bits can be copied at a time on a 32-bit bus.
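To make steps 1 and 2 concrete, here is a minimal sketch (the address and port are made up, and it assumes something is listening there): the program builds the response in its own user-space memory, and the sendall() system call is the point where the kernel copies those bytes into a kernel buffer for the NIC to pick up.

    import socket

    # Step 1: the data to be sent lives in the program's own (user-space) memory.
    response = b"HTTP/1.1 200 OK\r\nContent-Length: 2\r\n\r\nok"

    # Step 2: sendall() is a system call; the kernel copies the bytes from
    # user-space memory into a kernel buffer and later hands them to the NIC.
    # The call can return before the NIC has actually put anything on the wire.
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.connect(("127.0.0.1", 8080))   # hypothetical local listener
    sock.sendall(response)
    sock.close()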

The speed of the data in the network cable

Network transmission media include optical fiber and copper cable. An electrical signal travels through copper at about 2.3x10^8 m/s, while light travels through optical fiber at about 2.0x10^8 m/s.
Light propagates at 3.0x10^8 m/s in a vacuum, but inside a fiber it travels by repeated reflection rather than in a straight line, so the path it actually covers is much longer than the straight-line distance; the effective speed in optical fiber is therefore only about 2.0x10^8 m/s.

What is bandwidth? Definition of bandwidth

Definition of bandwidth: the rate at which data is sent.

Unit of bandwidth

100 Mbps = 100 megabits per second.
The "100M" bandwidth people talk about means 100 megabits per second,
and 100 Mbps = 12.5 MB/s.

Note: the "100M" refers to 100 Mb (megabits), whereas file sizes are usually quoted in MB (megabytes), and 1 MB = 8 Mb. So the "100-megabit broadband" the operator sells is really 12.5-megabyte-per-second broadband, hehe.
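As a quick sanity check of the conversion above, a small calculation (the 500 MB file size is just an illustration):

    # 1 byte = 8 bits, so an advertised 100 Mbps line moves at most 12.5 MB per second.
    bandwidth_mbps = 100
    bandwidth_MBps = bandwidth_mbps / 8          # 12.5 MB/s

    file_size_MB = 500                           # example file
    download_seconds = file_size_MB / bandwidth_MBps
    print(f"{bandwidth_mbps} Mbps = {bandwidth_MBps} MB/s")
    print(f"A {file_size_MB} MB file needs at least {download_seconds:.0f} s")   # 40 s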

What affects the data transmission speed (bandwidth)?
    1. The rate at which data is sent is governed by how fast the receiver can take it in. At the data link layer, to ensure no data is lost on the receiving side, the receiver keeps telling the sender whether the current sending rate is acceptable; if the receiver cannot keep up, it asks the sender to slow down. The sending rate (that is, the bandwidth actually achieved) is therefore determined by the receiver's receiving speed (see the sketch after this list).
    2. It also depends on the degree of parallelism of the transmission medium. The medium can be viewed as a multi-lane road: data consists of 0s and 1s, and each lane can carry only one 0/1 at a time. If the road gains more lanes, more 0/1s are carried per transfer, which increases the transmission speed (that is, the bandwidth).
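The receiver-driven control in point 1 is what TCP does with its advertised window, which is backed by the socket's receive buffer. A minimal sketch (illustrative only; the kernel may round the requested value): shrinking the receive buffer on the receiving side shrinks the advertised window and therefore slows the sender down.

    import socket

    # TCP flow control: the receiver advertises how much buffer space it has left;
    # a small receive buffer means a small advertised window, which throttles the sender.
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 8 * 1024)  # ask for an 8 KB buffer
    print("effective receive buffer:", sock.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF))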

Why should operators limit bandwidth?

Our server connects to the Internet through a switch. The Internet consists of countless routers and hosts; routers store and forward packets, and each packet is routed from one router to the next according to its destination address until it is finally delivered to the destination host.

Because a switch usually has several servers attached to it, a server first sends its outgoing data to the switch, which forwards it to a router; the router stores the packets in its cache and forwards them one by one in order. If servers send data too fast, the router's cache fills up and packets are dropped, so the rate at which a server may send data toward the router has to be limited; in other words, the server's bandwidth is limited. This limiting is done by the switch on the server's access port. As described above, the switch can cap the server's sending rate simply by controlling its own receiving rate.

What is shared bandwidth? What is exclusive bandwidth?

1. Exclusive Bandwidth
If the egress bandwidth of a router is 100 Mbps and there are 10 hosts in the same broadcast domain, the switch limits the maximum egress bandwidth of each host to 10 Mbps, and each host gets that 10 Mbps regardless of what the other hosts are doing. This is exclusive bandwidth: it is unaffected by the other hosts in the broadcast domain, and the maximum egress bandwidth is 10 Mbps at all times.

2. Shared Bandwidth
Assume the router's egress bandwidth is still 100 Mbps, but the operator, in order to make more money, puts more than 10 hosts into the same broadcast domain. The average maximum bandwidth per host is then less than 10 Mbps. Even though the switch still caps each host's egress bandwidth at 10 Mbps, when the hosts generate heavy traffic there is no guarantee that every host actually gets 10 Mbps; at that point the hosts compete with one another for bandwidth.

To sum up, exclusive 10M bandwidth guarantees that the server's maximum egress bandwidth is 10 Mbps in all cases, unaffected by other hosts in the same broadcast domain, while shared 10M bandwidth can only guarantee the 10 Mbps maximum when the other hosts in the broadcast domain are idle.

What is response time?

Response time is the time from the moment the first bit (0/1) of a packet leaves the server until the moment the last bit is received by the client.

Response time = Send time + transfer time + processing time

    • Send time: from the moment the first bit of the packet is sent until the moment the last bit has been sent.
      Send time = packet size in bits / bandwidth
    • Transmission time: the time the data spends travelling along the communication line.
      Transmission time = transmission distance / signal speed
      (the signal speed is roughly 2x10^8 m/s)
    • Processing time: the time spent on store-and-forward at each router along the path.
      Processing time is difficult to calculate precisely.

Response time = (packet size in bits / bandwidth) + (transmission distance / signal speed) + processing time

Download speed = number of bytes of data / response time
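Plugging some made-up but plausible numbers into the formulas above gives a feel for where the time goes (the processing time is simply assumed, since it is hard to calculate):

    # Response time = send time + transmission time + processing time (illustrative numbers).
    packet_bytes = 100 * 1024                 # a 100 KB response
    packet_bits = packet_bytes * 8

    bandwidth_bps = 10 * 1_000_000            # 10 Mbps egress bandwidth
    distance_m = 1_000_000                    # 1000 km between server and client
    signal_speed = 2e8                        # roughly 2x10^8 m/s in the medium

    send_time = packet_bits / bandwidth_bps           # ~0.082 s
    transmission_time = distance_m / signal_speed     # 0.005 s
    processing_time = 0.010                           # assumed store-and-forward delay

    response_time = send_time + transmission_time + processing_time
    download_speed = packet_bytes / response_time     # bytes per second
    print(f"response time = {response_time:.3f} s, "
          f"download speed = {download_speed / 1024:.0f} KB/s")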

What is throughput rate?

Throughput rate: the number of requests the server processes per unit of time.
Unit: reqs/s

The throughput rate is used to measure the server's ability to process requests.

When there are very few requests the throughput rate is low, because the server's capacity is not yet being exercised. As the number of requests grows, throughput rises, but once the number of concurrent requests reaches a certain point the throughput rate stops rising. That critical point is the server's maximum throughput rate, also called the maximum throughput.

If our website is about to run a promotion, we can use the method above to estimate the server's maximum throughput rate and judge whether the server can withstand the promotional load.

What is the number of concurrent requests? What is the number of concurrent users?

To understand the difference between the number of concurrent requests and the number of concurrent users, you first need to understand the HTTP protocol.

HTTP is an application-layer protocol and is itself connectionless: every time an exchange of data completes, the client and server disconnect, and a new connection is established for the next exchange. HTTP/1.1, however, supports keep-alive (the Connection: keep-alive header), which lets both parties hold the connection open for a while after an exchange completes. If the client needs to talk to the server again within that time, it does not have to create a new connection; it simply reuses the existing one, which improves communication efficiency and reduces overhead.

    • Number of concurrent requests: the number of requests clients send to the server. Whether or not a new connection has to be created, every request made to the server counts as one concurrent request.
    • Number of concurrent users: the number of TCP connections created. If one browser opens a single connection and sends 10 requests over it, that counts as only one concurrent user.

Note: browsers now support multiple simultaneous TCP connections to the same server, so a single user can account for several concurrent users. The "number of concurrent users" and the "number of users" are therefore not the same thing; keep this in mind!
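As a small sketch of the distinction (it uses the requests library and the article's test URL purely for illustration): a Session holds one keep-alive TCP connection and reuses it, so the loop below issues 10 requests but, by the definition above, counts as only 1 concurrent user.

    import requests

    # One TCP connection (one "concurrent user"), ten requests over it.
    with requests.Session() as session:
        for _ in range(10):
            session.get("http://www.acmcoder.com/index.php")  # reuses the keep-alive connection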

Average request wait time and server average request processing time

Average request wait time: the time from when the user clicks a button until the new page finishes loading.

Average server request processing time: the time from when the server takes a request out of the waiting queue until it finishes processing that request.

In summary: the average request wait time is measured from the user's point of view and is the indicator of how good or bad the user experience is,
while the average server request processing time measures how good or bad the server's performance is; it is in fact the reciprocal of the throughput rate.

Note: the average request wait time and the average server request processing time are not proportional to each other!
Average request wait time = request transfer time + time waiting in the queue + request processing time
Average server request processing time = request processing time
It follows that when the number of requests is small, requests sent by the browser are processed immediately without queueing, and the request wait time is proportional to the server's processing time. But when requests arrive far faster than the server can process them, many requests pile up in the waiting queue; even if the server handles each request quickly (that is, the average server request processing time is short), the user's wait time is still long. In that situation the user wait time is no longer proportional to the server's request processing time.
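A toy calculation with made-up numbers shows the effect: the server always needs 10 ms per request, but because requests arrive every 2 ms the queue grows and the user-perceived wait time climbs while the processing time stays flat.

    # Single-server queue, overload case: service takes 10 ms, arrivals come every 2 ms.
    service_time = 0.010        # average server request processing time (s)
    arrival_interval = 0.002    # a new request arrives every 2 ms

    backlog = 0.0               # time an arriving request must wait before being served
    for i in range(10):
        wait_time = backlog + service_time
        print(f"request {i}: user waits {wait_time * 1000:.0f} ms, processing is still 10 ms")
        backlog += service_time - arrival_interval   # the queue grows by 8 ms per request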

Use Apache bench for stress testing

We use Apache's benchmarking tool, Apache Bench (ab), to stress-test the site. ab is easy to use, and the key point is that the test can be launched locally on the server itself, so the measured time excludes transfer time; that server processing time tells us how well the server performs.

1. Stress test command
ab -n100 -c10 http://www.acmcoder.com/index.php
    • -n100: total number of requests
    • -c10: number of concurrent users (simultaneous connections)
    • http://www.acmcoder.com/index.php: the page to be tested
2. Analysis of test results
Server Software:        openresty           # server software
Server Hostname:        www.acmcoder.com    # hostname tested
Server Port:            80                  # port accessed
Document Path:          /index.php          # page tested
Document Length:        162 bytes           # length of the HTTP response body
Concurrency Level:      10                  # number of concurrent users
Time taken for tests:   1.497209 seconds    # time the test took
Complete requests:      100                 # total number of requests
Failed requests:        0                   # failed requests (responses with a non-2xx status code are counted under Non-2xx responses, not here)
Write errors:           0
Non-2xx responses:      100                 # responses whose status code was not 2xx
Total transferred:      32400 bytes         # total response data, including HTTP headers and bodies but not the request data
HTML transferred:       16200 bytes         # body data of the HTTP responses
Requests per second:    66.79 [#/sec] (mean)                                # throughput rate
Time per request:       149.721 [ms] (mean)                                 # average user request wait time
Time per request:       14.972 [ms] (mean, across all concurrent requests)  # average server request processing time
Transfer rate:          20.71 [Kbytes/sec] received                         # server's data transfer rate (in the extreme case, the server's egress bandwidth)

The output ends with the connection-time statistics and the percentile table; in this run, 50% of the requests were served within 92 ms.


How do I choose the URL of the site to be tested?

A site may have many URLs, and each URL is handled differently, so test results for a single URL are not representative. We therefore need to select a series of representative URLs and take a weighted average of their test results as the site's overall performance figure.
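One way to do that weighting, sketched below with hypothetical URLs, throughput numbers and traffic shares (only the index.php figure comes from the ab run above):

    # Combine per-URL ab results into one site-wide figure, weighted by traffic share.
    results = {
        # url: (requests/sec measured with ab, share of real traffic)
        "http://www.acmcoder.com/index.php": (66.79, 0.7),
        "http://www.acmcoder.com/search.php": (40.00, 0.3),   # hypothetical second page
    }

    weighted_throughput = sum(rps * share for rps, share in results.values())
    print(f"weighted site throughput = {weighted_throughput:.1f} reqs/s")   # ~58.8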
